The National Institute of Standards and Technology wants public feedback on a plan to develop guidance for how companies can implement various types of artificial intelligence systems in a secure manner.
NIST on Thursday released a concept paper about creating control overlays for securing AI systems based on the agency’s widely used SP 800-53 framework. The overlays are designed to help ensure that companies implement AI in a way that maintains the integrity and confidentiality of the technology and the data it uses across a range of use cases.
The agency also created a Slack channel to collect community feedback on the development of the overlays.
“The advances and potential use cases for adopting artificial intelligence (AI) technologies brings both new opportunities and new cybersecurity risks,” the NIST paper said. “While modern AI systems are predominantly software, they introduce different security challenges than traditional software.”
The rapid acceleration of AI use in corporate environments has created opportunities for companies to improve workplace productivity, but it has also prompted serious concerns about whether the technology can be implemented securely.
Read more at:
https://www.cybersecuritydive.com/news/nist-input-control-overlays-securing-ai/757909/