Cornell Chronicle: It pays to be brief when asking artificial intelligence tools to mine massive datasets for insights, according to Cornell researcher Immanuel Trummer.

That's why Trummer, associate professor of computer science in the Cornell Ann S. Bowers College of Computing and Information Science, has developed a new computational system, called Schemonic, that cuts the costs of using large language models (LLMs) such as ChatGPT and Google Bard by combing through large datasets and generating what amount to "CliffsNotes" versions of the data that the models can understand. Using Schemonic cuts the costs of using LLMs as much as tenfold, Trummer said.

"The monetary fees associated with using large language models are non-negligible," said Trummer, the author of "Generating Succinct Descriptions of Database Schemata for Cost-Efficient Prompting of Large Language Models," which was presented at the 50th International Conference on Very Large Data Bases (VLDB), held Aug. 26-30 in Guangzhou, China. "I think it's a problem everyone who is using these models has."
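To make the cost argument concrete: LLM providers charge per token, so a shorter schema description directly translates into a lower prompt fee. The toy Python sketch below is not Schemonic's actual algorithm (the paper describes a more sophisticated approach); it merely contrasts a verbose, SQL-style schema description with a compact one-line-per-table summary of the same hypothetical schema, using a rough characters-per-token heuristic as a cost proxy.

```python
# Toy illustration of why succinct schema descriptions cut LLM prompting
# costs. This is NOT Trummer's Schemonic algorithm -- just a sketch of
# the underlying cost intuition, on a made-up two-table schema.

schema = {
    "customers": ["id", "name", "email", "signup_date"],
    "orders": ["id", "customer_id", "total", "created_at"],
}

def verbose_description(schema):
    """Spell out each table as a full CREATE TABLE-style statement."""
    parts = []
    for table, cols in schema.items():
        col_lines = ",\n".join(f"    {c} TEXT" for c in cols)
        parts.append(f"CREATE TABLE {table} (\n{col_lines}\n);")
    return "\n".join(parts)

def compact_description(schema):
    """One line per table: table(col1, col2, ...)."""
    return "\n".join(
        f"{table}({', '.join(cols)})" for table, cols in schema.items()
    )

def rough_token_count(text):
    # Crude proxy: roughly one token per four characters of English text.
    return max(1, len(text) // 4)

verbose = verbose_description(schema)
compact = compact_description(schema)
print("verbose tokens:", rough_token_count(verbose))
print("compact tokens:", rough_token_count(compact))
```

Since per-request fees scale with prompt length, the compact form is cheaper to include in every query sent to the model, which is the economic lever the article describes.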