The project "Artificial intelligence in private pension provision: Opportunities and Risks" is a cooperation between the KU, under the direction of Streich, and the Technical University of Dresden, under the direction of Prof. Dr. Lars Hornuf. Working at the interface of finance, behavioral science, and business informatics, Streich and Hornuf focus on large language models (LLMs) such as ChatGPT. Such AI applications can not only understand and generate natural-language text but also handle increasingly complex tasks. Starting in spring 2025, the economists will examine the use of LLMs to generate investment recommendations for private pension provision within their DFG-funded project.
Great potential for AI-supported advice
As the researchers emphasize, this is an area with great potential for the use of AI. With the sustainability of statutory pension provision decreasing, people increasingly have to rely on capital-market-based investments. At the same time, studies show that financial literacy is low and stock-market participation correspondingly limited: only around 18 percent of German adults own shares. Those who do invest sometimes make costly mistakes, and even human investment advisors can only help to a certain extent. "A study of Canadian investment advisors showed that they do not tailor their portfolio recommendations sufficiently to their clients' circumstances and essentially recommend their own investment strategy, which is not always ideal," says David Streich, who holds the Assistant Professorship of Digital Finance at the Ingolstadt School of Management. Added to this are conflicts of interest, especially among bank advisors, who have a financial incentive to sell their own institution's products.
On the other hand, initial studies suggest that LLMs may be able to make suitable investment recommendations. "Unlike human investors, LLMs are able to process large volumes of unstructured information relevant to the capital market and translate it into an investment strategy," says David Streich. Because security prices react to newly published information, this processing capacity is a genuine advantage of AI tools.
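The idea of translating unstructured information into a structured investment signal can be sketched as follows. This is purely an illustrative outline, not the project's actual pipeline; `query_llm` is a hypothetical stand-in that returns a canned answer here, where a real system would call an LLM API such as ChatGPT.

```python
# Illustrative sketch, NOT the researchers' method: mapping an unstructured
# news snippet to a structured buy/hold/sell signal via an LLM.

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call.

    Returns a canned answer so the sketch runs without external services.
    """
    return "sell"

def news_to_signal(news: str) -> str:
    """Ask the (stubbed) LLM to classify the likely price impact of a news item."""
    prompt = (
        "Classify the likely short-term impact of this news on the "
        f"company's share price as buy, hold, or sell:\n{news}"
    )
    answer = query_llm(prompt).strip().lower()
    # Fall back to a neutral signal if the model's answer is not parseable.
    return answer if answer in {"buy", "hold", "sell"} else "hold"

print(news_to_signal("Regulator opens probe into the firm's accounting."))  # prints "sell"
```

A production system would additionally need prompt validation, error handling, and safeguards against the biases discussed below; the point here is only the structure of the unstructured-text-to-signal step.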
How useful AI-generated investment recommendations actually turn out to be in practice depends on their quality and on their acceptance by users. Which factors play a role here is a central question of the DFG-funded project. "Among other things, we plan to observe in laboratory experiments how the interaction with AI tools works," explains KU Professor Streich. Transparency and an understanding of the AI's decision-making processes are expected to be particularly important. How acceptance relates to response time is also of interest: a response that comes too quickly could be perceived as non-human and therefore negatively, but it could equally come across as particularly professional.
What if AI replicates human errors?
In other sub-projects, the economists focus on the potential risks of using AI. Alongside possible data-protection problems, algorithmic biases are a particular concern. "LLMs are trained on texts written by humans. There is therefore a risk that biases contained in the training data will be adopted and replicated," explains David Streich. In preliminary results, Streich and Hornuf found, for example, that LLMs overweighted domestic shares and thus reproduced the so-called "home bias". In a second step, the researchers will also investigate how users react to biased recommendations. Given the widespread preference for domestic shares, an unbiased recommendation may even meet with less acceptance than a biased one that matches investors' habits.
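The home bias mentioned above can be quantified very simply: compare the weight a recommendation places on domestic shares with the domestic market's share of world market capitalization. The sketch below is an illustration with hypothetical figures, not the researchers' measurement approach.

```python
# Illustrative sketch (hypothetical numbers): quantifying "home bias" as the
# excess weight on domestic shares relative to a market-cap-weighted benchmark.

def home_bias(domestic_weight: float, world_market_share: float) -> float:
    """Excess portfolio weight on domestic shares vs. a neutral benchmark.

    A positive value indicates an overweight of the home market.
    """
    return domestic_weight - world_market_share

# Hypothetical example: a recommendation puts 35% into German equities,
# while Germany's share of world market capitalization is assumed at 2.5%.
bias = home_bias(domestic_weight=0.35, world_market_share=0.025)
print(f"Home bias: {bias:.1%} overweight")  # prints "Home bias: 32.5% overweight"
```

Under this simple definition, an unbiased recommendation is one whose domestic weight matches the benchmark share, so `home_bias` returns roughly zero.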
Whether and to what extent ChatGPT and similar AI tools will actually provide simple, cost-effective access to high-quality investment advice is the big question that Streich and Hornuf aim to answer over the next three years.