Prominent players in the AI industry, including OpenAI, Microsoft, Google, and Anthropic, yesterday announced the formation of the Frontier Model Forum. The forum's primary objective is to oversee the development of large machine learning models and ensure they are deployed safely and responsibly.
Termed “frontier AI models,” these cutting-edge systems surpass the capabilities of today's most advanced models. The concern, Reuters reported, is that they could develop dangerous capabilities that pose significant risks to public safety.
Among the most well-known applications of these models are generative AI systems, such as the one powering chatbots like ChatGPT. These models draw on vast amounts of data to rapidly generate responses in the form of prose, poetry, and images.
Despite the many use cases for such advanced AI models, several government bodies, including the European Union, and industry leaders, such as OpenAI CEO Sam Altman, have emphasized the need for appropriate guardrails to mitigate the risks associated with AI.