Prominent players in the AI industry, including OpenAI, Microsoft, Google, and Anthropic, yesterday announced the formation of the Frontier Model Forum. The forum's primary objective is to oversee the development of large machine learning models, with a specific focus on ensuring their safe and responsible deployment.
These cutting-edge systems, termed “frontier AI models,” surpass the capabilities of today's most advanced models. The concern, Reuters reported, is that they could develop dangerous capabilities that pose significant risks to public safety.
Among the most well-known applications of these models are generative AI systems, such as the one powering chatbots like ChatGPT. These models can draw on vast amounts of data to rapidly generate responses in the form of prose, poetry, and images.
Despite the numerous use cases for such advanced AI models, several government bodies, including the European Union, and industry leaders, like OpenAI CEO Sam Altman, have emphasized the need for appropriate guardrails to mitigate the risks associated with AI.