
AI monitoring agent eyed to prevent harmful output in real-world scenarios
AutoGPT's AI monitoring tool mitigates risks by identifying harmful outputs in real-time for large language models, including OpenAI's ChatGPT and Google's Bard.
The News Media Alliance submitted a white paper and other accompanying documents to the United States Copyright Office, raising alarm over the trend of copyright violation by AI chatbots.
Amazon's CEO Andy Jassy sees artificial intelligence (AI) as the key to future earnings growth, with a $1.25 billion investment in Anthropic AI and plans for generative AI expansion.
AI models tend to prioritize user desires over truth, with reinforcement learning from human feedback potentially responsible, prompting researchers to explore alternative training methods.
Blockchain technology may offer a better way to ensure copyright protection in the generative AI age than the long and expensive path of litigation up to the Supreme Court.
US Space Force's ban on generative AI tools due to data security risks reflects the growing trend of organizations restricting their use over data concerns.
Alibaba's Tongyi Qianwen AI model, with 7 billion parameters, challenges ChatGPT and Bard, aiming to empower enterprises and attract diverse users with its upcoming open-source release.
Meta launches its free-to-use Code Llama AI project with two new variations, Code Llama - Python and Code Llama - Instruct, seeking to help programmers improve their workflows.
Google and iCAD are collaborating to apply AI to healthcare services such as 2D breast mammography and cancer detection, while working to ease radiologists' workloads.
Alibaba noted that enterprises can leverage the open-source LLM's code and model weights, along with extensive documentation, free of charge.
The commercial rollout of Meta's large language model, LLaMa, will be customizable for enterprises and allow developers to create their own software, according to a Financial Times report.
The Federal Trade Commission is probing whether OpenAI has breached consumer protection laws or caused reputational harm to users, and is examining its data security and privacy measures.