Pixdeluxe | E+ | Getty Images
As the use of artificial intelligence – both benign and adversarial – increases, models are producing more potentially harmful outputs. These include hate speech, copyright infringements and sexual content.
The emergence of these undesirable behaviors is compounded by a lack of regulation and insufficient testing of AI models, researchers told CNBC.
Getting machine learning models to behave the way they were intended to is difficult, said AI researcher Javier Rando.
“The answer, after almost 15 years of research, is that we don’t know how to do this, and it doesn’t look like we are getting better,” Rando told CNBC.
However, there are some ways to evaluate the risks of AI, such as red teaming. The practice involves individuals testing and probing artificial intelligence systems to uncover and identify potential harms, an approach that is common in cybersecurity circles.
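In practice, red teaming of language models is often partly automated: a harness replays adversarial prompts against a system and flags responses that warrant human review. Below is a minimal illustrative sketch of that idea; the function name `query_model`, the prompt list and the refusal markers are hypothetical placeholders for this sketch, not any vendor's actual API.

```python
# Illustrative red-team probe: replay adversarial prompts against a model
# endpoint and flag responses that lack an explicit refusal.
# `query_model`, the prompts and the refusal markers are hypothetical
# placeholders, not any specific product's API.
from typing import Callable, Dict, List

REFUSAL_MARKERS = ["i can't help", "i cannot assist", "i won't provide"]

def red_team_probe(query_model: Callable[[str], str],
                   adversarial_prompts: List[str]) -> List[Dict[str, str]]:
    """Return prompt/response pairs whose responses lack an explicit refusal."""
    findings = []
    for prompt in adversarial_prompts:
        response = query_model(prompt)
        refused = any(marker in response.lower() for marker in REFUSAL_MARKERS)
        if not refused:
            # A non-refusal is only a candidate harm; a human reviewer
            # (or a subject-matter expert) still has to vet it.
            findings.append({"prompt": prompt, "response": response})
    return findings
```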
Shayne Longpre, a researcher in AI and policy and lead of the Data Provenance Initiative, noted that there are currently not enough people working in red teams.
While AI startups now use first-party evaluators or contracted second parties to test their models, opening testing up to third parties such as ordinary users, journalists, researchers and ethical hackers would lead to more robust evaluation, according to a paper published by Longpre and fellow researchers.
“Some of the flaws being found in these systems needed doctors and scientists who are specialized subject-matter experts to judge whether they were flaws at all, something the ordinary person probably could not sufficiently do,” Longpre said.
Standardized reports of AI flaws, incentives, and ways to disseminate information about these flaws in AI systems are some of the paper’s recommendations.
With this practice having been successfully adopted in other sectors such as software security, “we need that in AI now,” Longpre added.
Marrying these practices with governance, policy and other tools would ensure a better understanding of the risks posed by AI tools and their users.
Project Moonshot is one such approach, combining technical solutions with policy mechanisms. Launched by Singapore’s Infocomm Media Development Authority, it is a large language model evaluation toolkit developed with industry players including IBM and Boston-based DataRobot.
The toolkit integrates benchmarking, red teaming and testing baselines. It also includes an evaluation mechanism that lets AI startups ensure their models can be trusted and do no harm to users, Anup Kumar, head of client engineering for data and AI at IBM Asia Pacific, told CNBC.
Evaluation is a continuous process that should be carried out both before and after a model is deployed, said Kumar, who noted that the response to the toolkit has been mixed.
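A hedged sketch of what such continuous evaluation can look like in code follows: the same safety benchmark runs before release and on a schedule after deployment, and the check fails if the pass rate drops below a threshold. The function and data shapes here are assumptions made for illustration, not Project Moonshot's actual interface.

```python
# Illustrative continuous-evaluation gate: run the same safety benchmark
# before a model ships and on a schedule after deployment, failing the
# check if the pass rate drops below a threshold. The function and data
# shapes are assumptions for this sketch, not Project Moonshot's interface.
from typing import Callable, List, Tuple

def run_safety_benchmark(query_model: Callable[[str], str],
                         cases: List[Tuple[str, Callable[[str], bool]]],
                         min_pass_rate: float = 0.95) -> bool:
    """Each case pairs a prompt with a predicate that judges the response safe."""
    passed = sum(1 for prompt, is_safe in cases if is_safe(query_model(prompt)))
    pass_rate = passed / len(cases)
    print(f"Safety pass rate: {pass_rate:.2%}")
    return pass_rate >= min_pass_rate

# The same gate can run in CI before release and as a scheduled job afterwards,
# reflecting the point that evaluation is continuous rather than one-off.
```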
“A lot of startups took this up as a platform because it was open source, and they started leveraging that. But I think, you know, we can do a lot more.”
Moving forward, Project Moonshot aims to include customization for specific industry use cases and to enable multilingual and multicultural red teaming.
Pierre Alquier, professor of statistics at ESSEC Business School, Asia-Pacific, said tech companies today are rushing to release their latest AI models without proper evaluation.
“When a pharmaceutical company designs a new drug, they need tests and very serious proof that it is useful and not harmful before it is approved by the government,” he said, noting that a similar process is in place in the aviation sector.
AI models likewise need to meet a strict set of conditions before they are approved, Alquier added. Developing models that are designed for more specific tasks would make it easier to anticipate and control their misuse, he said.
“LLMs can do too many things, but they are not targeted at tasks that are specific enough,” he said. As a result, “the number of possible misuses is too big for the developers to anticipate all of them.”
Such broad models also make it difficult to define what counts as safe and secure, according to research Rando was involved in.
Tech companies should therefore avoid claiming that “their defenses are better than they are,” Rando said.