Why Actuaries Need a Central Data Hub to Transform Loss Reserving

Data defines the pace and quality of the loss reserving process, yet gaining access to it remains a persistent problem for many actuaries. Legacy systems are typically ill-suited to the data-intensive requirements of a modern reserving process, and this bottleneck is often the prime constraint behind the rushed, sometimes fitful pace of the quarterly reserve review. In the end, ‘just getting it done’ can become the focus of actuarial analysis when much more is needed, and possible.
Actuarial departments are under increasing pressure from regulators, rating agencies, and audit committees to deliver more sophisticated loss reserve estimates within shorter timeframes. Conventional methods of accessing data, however, can stand in the way.
Today’s reserving process is often riddled with delays and interruptions that begin with a request for data, typically to IT or the claims department. That request is usually followed by the need to verify that the data and any adjustments flow correctly through a maze of spreadsheets, reconcile to the original source, and remain consistent with prior analyses. Time-consuming and labor-intensive, these adjusting and reconciling tasks delay the reserving process and misallocate actuarial resources that a reengineered system could put to better use.
