After ChatGPT, Microsoft working on AI model that takes images as cues
The paper suggests that multimodal perception, or knowledge acquisition and "grounding" in the real world, is needed to move beyond ChatGPT-like capabilities to artificial general intelligence (AGI), reports ZDNet.
As the war over artificial intelligence (AI) chatbots heats up, Microsoft has unveiled Kosmos-1, a new AI model that can respond to visual cues or images in addition to text prompts.
The multimodal large language model (MLLM) can handle an array of new tasks, including image captioning, visual question answering and more. Kosmos-1 could pave the way for the next stage beyond ChatGPT's text-only prompts.
"A big convergence of language, multimodal perception, action, and world modeling is a key step toward artificial general intelligence. In this work, we introduce Kosmos-1, a Multimodal Large Language Model (MLLM) that can perceive general modalities, learn in context and follow instructions," said Microsoft's AI researchers in a paper.
"More importantly, unlocking multimodal input greatly widens the applications of language models to more high-value areas, such as multimodal machine learning, document intelligence, and robotics," the paper read.
The goal is to align perception with LLMs so that the models are able to see and talk. Experimental results showed that Kosmos-1 achieves impressive performance on language understanding and generation, and even on OCR-free NLP tasks in which the model is directly fed document images.
It also showed good results on perception-language tasks, including multimodal dialogue, image captioning and visual question answering, and on vision tasks such as image recognition with descriptions (specifying classification via text instructions).
"We also show that MLLMs can benefit from cross-modal transfer, i.e., transfer knowledge from language to multimodal, and from multimodal to language. In addition, we introduce a dataset of Raven IQ test, which diagnoses the nonverbal reasoning capability of MLLMs," said the team.