Updating AI principles
Since we first published our AI Principles in 2018, the technology has evolved rapidly. Billions of people now use AI in their everyday lives. AI has become a general-purpose technology, and a platform that countless organizations and individuals use to build applications. It has moved from a niche research topic in the lab to a technology that is becoming as pervasive as mobile phones and the internet itself; one with numerous beneficial uses for society and people around the world, supported by a vibrant AI ecosystem of developers.
Common baseline principles are an important part of this evolution. Alongside other AI companies and academic institutions, we are encouraged by the progress we have seen on AI principles globally. The G7 and the International Organization for Standardization, as well as individual democratic nations, have all published frameworks to guide the safe development and use of AI. Increasingly, our experience and research over these years, along with the threat intelligence, expertise, and best practices we have shared with other AI companies, have deepened our understanding of AI's potential and risks.
A global competition for AI leadership is taking place within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.
Against that backdrop, we are updating our own AI Principles to focus on three core tenets:
- Bold Innovation: We develop AI to assist, empower, and inspire people in almost every field of human endeavor, drive economic progress and improve lives, enable scientific breakthroughs, and help address humanity's biggest challenges.
- Responsible Development and Deployment: Because we understand that AI, as a still-emerging transformative technology, poses evolving complexities and risks, we see it as an imperative to pursue AI responsibly throughout the development and deployment lifecycle, from design to testing to deployment to iteration, learning as AI advances and uses evolve.
- Collaborative Progress, Together: We learn from others, and build technology that empowers others to harness AI positively.
You can read our full AI principles at AI.Google.
Guided by our AI Principles, we will continue to focus on AI research and applications that align with our mission, our scientific focus, and our areas of expertise, and that remain consistent with widely accepted principles of international law and human rights, always evaluating specific work by carefully assessing whether the benefits substantially outweigh potential risks. We will also consider whether our engagements require bespoke research and development or rely on general-purpose, widely available technology. These assessments are especially important as AI is increasingly developed by many organizations and governments for uses in areas such as healthcare, science, robotics, cybersecurity, transportation, national security, energy, climate, and more.
Beyond these principles, we will of course continue to maintain specific product policies and clear terms of service that contain prohibitions, such as on illegal uses of our services.