In a recent Securities and Exchange Commission (SEC) roundtable, industry leaders and academics discussed the slow adoption of generative artificial intelligence (AI) within financial services. Despite recognizing its potential benefits for back-office efficiency and customer-facing tasks, financial institutions are adopting these technologies more slowly than tech companies. Panelists highlighted challenges such as high costs, difficulty in measuring return on investment (ROI), and "last mile" issues. The conversation also delved into new risks posed by AI, which may necessitate updated risk management frameworks.
During the SEC-hosted roundtable discussion on March 27, executives acknowledged that while generative AI holds promise for enhancing various operations, its deployment remains in its infancy. Sarah Hammer from the Wharton School emphasized the technology's potential to streamline inefficient processes like clearing and settlement. However, Hardeep Walia of Charles Schwab noted a significant gap between the speed of development by tech firms and the cautious approach taken by financial institutions.
One major factor slowing adoption is cost. Because the technology is expensive and still maturing, enterprises struggle to assess its ROI. Nevertheless, some panelists pointed out that costs are gradually decreasing, due in part to open-source models such as DeepSeek. Peter Slattery from MIT's FutureTech introduced another obstacle: the "last mile" issue. Although current AI models can reach roughly 90% of human performance, closing the remaining gap requires an exponential leap in quality, making complete automation unlikely in the near future.
Tyler Derr from Broadridge offered a different perspective, suggesting that the primary goal of AI implementation should be the enhancement of human capabilities rather than their replacement. This shift in focus could alleviate some concerns about job displacement. The discussion then expanded to the novel risks associated with AI. As AI assumes more responsibilities within organizations, traditional risk management strategies may fall short. Slattery, who leads MIT's AI Risk Repository, explained that new categories of risk, such as multi-agent risks, need to be addressed. Questions like liability in collaborative AI environments are complex enough that existing legal frameworks may need to be rethought.
As enterprises navigate these challenges, robust governance policies become essential. Sarah Hammer reiterated the importance of responsible AI practices, supported by global regulatory frameworks. Tyler Derr stressed that risk policies must be updated dynamically, adapting as new use cases emerge. Collaboration with regulators such as the SEC was identified as crucial for successful AI integration: just as cybersecurity demands collective effort, so too does the governance of AI technologies.
Looking ahead, the financial sector must balance innovation with caution. By addressing cost barriers, refining risk management strategies, and fostering collaboration, institutions can harness the power of generative AI responsibly. The ongoing dialogue between industry leaders and regulators will play a pivotal role in shaping the future landscape of AI in financial services.