Responsibility

Developing safer language models

Large language models help us build powerful systems that augment human capabilities, but they can also be used in ways that cause harm. We believe language model providers must develop their technologies responsibly. This means proactively working to build safer products and accepting a duty of care to users, the environment, and society.


Responsibility is an active research area at Cohere that requires interdisciplinary collaboration.


We’ve invested in technical and non-technical measures to mitigate potential harm and make our development processes transparent. We've also established an advisory Responsibility Council empowered to inform our product and business decisions.


We are excited about the potential for algorithmic language understanding to improve accessibility, enable human-computer interaction, and allow for broader human-to-human dialogue. If you want to use our API to help create a better world, or to stay informed of new developments, let us know.



Responsibility principles


Model the world as we hope it will become.


Anticipate risks and listen to those affected.


Build in mitigation efforts commensurate with expected and actual impacts.


Continually assess the societal impacts of our work.



Our process


We believe that no technology can be made absolutely safe, and machine learning is no exception. That is why we anticipate and account for risks throughout our development process. We run adversarial attacks, filter our training data for harmful text, and measure our models against safety research benchmarks. We also evaluate evolving risks with monitoring tools designed to identify harmful model outputs.
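For readers curious what one of these measures can look like in practice, here is a minimal sketch of a training-data filter. The `BLOCKLIST` terms and the `harm_score` heuristic are hypothetical stand-ins for the trained classifiers a production pipeline would use; none of this reflects Cohere's actual implementation.

```python
from typing import Iterable, Iterator

# Hypothetical blocklist; a real pipeline would use a trained harmfulness
# classifier rather than keyword matching.
BLOCKLIST = {"badword1", "badword2"}

def harm_score(text: str) -> float:
    """Fraction of tokens that appear on the blocklist (illustrative only)."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w in BLOCKLIST for w in words) / len(words)

def filter_training_data(docs: Iterable[str], threshold: float = 0.01) -> Iterator[str]:
    """Keep only documents whose harm score falls below the threshold."""
    for doc in docs:
        if harm_score(doc) < threshold:
            yield doc

if __name__ == "__main__":
    corpus = ["a perfectly benign sentence", "this one contains badword1"]
    print(list(filter_training_data(corpus)))  # keeps only the benign document
```

The same scoring idea extends to output monitoring: the identical score can be computed on model generations at serving time and flagged when it crosses a threshold.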


We recognize that misuse of powerful language models will disproportionately impact the most vulnerable, so we aim to balance safety considerations and equity of access. This is an ongoing process. As we release early versions of our technology, we’ll work closely with our partners and users to ensure its safe and responsible use.




Accountability


Responsibility means more than getting the technology right. It’s about who makes the decisions, how we communicate risks upfront, and how we’re held accountable. To help API users anticipate risks, we have published data statements (public documentation about the data used to train our models) and model cards (benchmarks and information about where our technology is and is not working well) for our datasets and models in our documentation.
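As an illustration of the kind of information a model card records, here is a minimal sketch expressed as a plain Python structure. Every field name and value below is an assumption chosen for illustration, not Cohere's published schema.

```python
# Illustrative model card structure. All names and values are hypothetical
# placeholders, not Cohere's actual format.
model_card = {
    "model": "example-generation-model",
    "intended_uses": ["summarization", "drafting assistance"],
    "out_of_scope_uses": ["medical, legal, or financial advice"],
    "training_data": "See the accompanying data statement.",
    "evaluations": ["safety research benchmarks", "adversarial testing"],
    "known_limitations": ["May produce biased or factually incorrect text."],
}

for field, value in model_card.items():
    print(f"{field}: {value}")
```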


We believe that good decisions don’t happen in a vacuum. Responsibility must be baked into the culture and norms of the machine learning ecosystem, and we’re committed to sharing knowledge and best practices. We’ll be supporting workshops around responsible use—if you’re developing language models or working to mitigate their risks, please reach out.


Our advisory Responsibility Council is made up of external experts who advise on our business and research practices. They have visibility into our customer development pipeline and engineering processes, and the agency to ask difficult questions about the permissible uses of our technologies.



Usage Guidelines


We require our users to abide by Cohere’s Usage Guidelines, and access will be revoked if these terms are not followed. If you spot the Cohere Platform being used in a harmful or otherwise inappropriate way, please report it to us.