Back in October 2022, the White House Office of Science and Technology Policy (OSTP) issued the Biden Administration's Blueprint for an AI Bill of Rights as a guide for equitable access and use of automated systems and artificial intelligence (AI). In 2023, the conversation and best practices will accelerate across governments and industries, particularly in the United States, which lags behind the EU and others in this area. Urgency will grow as adoption of data-driven machine learning (ML) systems increases, transforming businesses, and as consumer attitudes toward emerging solutions continue to evolve.

This framework builds on the core principles for tech accountability and reform that the U.S. government has attempted to define in fits and starts for more than a decade. While recent legislation, including the CHIPS and Science Act, has funneled federal dollars into building cutting-edge technologies stateside, enforceable guidance on how to responsibly vet and manage new tech has not kept pace.

Many find the absence of clear federal guidance especially glaring as data-driven solutions have skyrocketed in prominence within the enterprise space, promising more informed decision-making and automated systems powered by AI and ML. This initial U.S. Blueprint marries themes from earlier, non-U.S. data privacy legislation with guidance from AI innovators and thought leaders, all with an emphasis on social justice and equity that many experts argue hasn't been prioritized to date.

What is an AI Bill of Rights?

This as-yet unenforceable Blueprint offers five principles to guide organizations developing or using AI:

  1. Safe and Effective Systems: Citizens shouldn't be exposed to untested or poorly vetted AI systems that could produce unsafe outcomes, whether for individuals, specific communities, or the operations leveraging individual data.
  2. Algorithmic Discrimination Protections: Simply put, AI models can’t be designed with bias-driven outcomes in mind, nor should systems be deployed that haven’t been vetted for potential discrimination.
  3. Data Privacy: Organizations must not engage in abusive data practices, nor should the use of surveillance technologies go unchecked. 
  4. Notice and Explanation: Individuals should always be informed when (and how) their data is being used and how it will affect outcomes. 
  5. Human Alternatives, Consideration, and Fallback: Individuals should not only have the authority to opt out of data collection; there should also be a human practitioner they can turn to when concerns arise.

Why is a framework for AI systems and their use important?

While the new guidelines are delivered under the banner of AI, they're more akin to the broad-based consumer 'Bill of Rights' delivered by the European Union back in 2018 via the General Data Protection Regulation (GDPR). Anyone working in tech (or, realistically, the enterprise space) in the 2010s will be familiar with GDPR, which placed an enforceable framework around data privacy that had been startlingly absent before the regulation was first adopted in 2016.

With billions in fines levied against Big Tech since GDPR became enforceable in 2018, it remains one of the few globally significant legislative measures that gives individuals ownership over their personal information. (That said, EU legislators are hard at work crafting AI-specific regulations that build on the GDPR principles already in action.)

Stateside, however, there have yet to be any significant data protection laws at the federal level, though California's CCPA and similar legislation in Illinois and other states offer a template federal legislators could follow. Instead, most companies collecting data in the United States voluntarily adhere to ethical standards for data collection; in fact, many global enterprises simply mirror their EU-mandated data practices stateside for consistency, applying the same protections and permissions to U.S. data by default, even though GDPR can't legally be enforced on data collected from U.S. citizens.

This isn't to say there are no protections for personally identifiable information (PII) in the U.S.; HIPAA and state-level legislation, for example, protect individuals' healthcare information from being disclosed without permission.

But things get tricky as both the breadth and variety of data businesses are capable of collecting accelerate on a massive scale, alongside the number of applications for AI and ML driving modern enterprise transformation.

Your blueprint for vision AI success

Enterprises are collecting massive quantities of unstructured image and video data that must be annotated and managed responsibly to ensure that the computer vision models trained on it not only perform as expected but also avoid potential ethical pitfalls. This is especially important when the visual data pertains to humans.

When video and image data is first collected, it is unstructured: there are no inherent frame, image, or object descriptions or classifications from which computer vision models can "learn." Designing computer vision models therefore carries an explicit human-in-the-loop (HITL) requirement from the start, with ongoing management and oversight to prevent unintended consequences, like data bias, from impacting the application. Businesses need to be sure they are leveraging knowledgeable partners who can assist in the end-to-end accountability and reliability of the computer vision model by putting responsibility at the forefront of each solution strategy.
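As a minimal sketch of what this looks like in practice (not Plainsight's platform or any specific labeling tool; the Annotation and AnnotatedFrame structures, label names, and thresholds below are illustrative assumptions), human-supplied annotations can be stored alongside each frame, and a simple class-distribution check run before training can surface label imbalances that often signal bias in the underlying data:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Annotation:
    """One human-supplied label for an object in a frame (hypothetical schema)."""
    label: str                                 # e.g. "person", "vehicle"
    bbox: tuple[float, float, float, float]    # (x_min, y_min, x_max, y_max), normalized
    annotator_id: str                          # who drew the box; useful for HITL review

@dataclass
class AnnotatedFrame:
    frame_id: str
    annotations: list[Annotation]

def label_distribution(frames: list[AnnotatedFrame]) -> Counter:
    """Count labels across the dataset so skewed classes can be flagged before training."""
    counts: Counter = Counter()
    for frame in frames:
        counts.update(a.label for a in frame.annotations)
    return counts

def flag_underrepresented(counts: Counter, min_share: float = 0.05) -> list[str]:
    """Return labels whose share of all annotations falls below min_share."""
    total = sum(counts.values())
    return [label for label, n in counts.items() if n / total < min_share]

# Usage: human reviewers inspect flagged classes and add or rebalance data before training.
frames = [
    AnnotatedFrame("f001", [Annotation("person", (0.1, 0.2, 0.4, 0.9), "reviewer_1")]),
    AnnotatedFrame("f002", [Annotation("vehicle", (0.0, 0.1, 0.5, 0.6), "reviewer_2"),
                            Annotation("person", (0.6, 0.2, 0.8, 0.9), "reviewer_2")]),
]
counts = label_distribution(frames)
print(counts, flag_underrepresented(counts))
```

A check like this is only one small piece of ongoing HITL oversight, but it illustrates why annotation quality and review have to be designed in from the start rather than bolted on after a model is trained.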

With a unique combination of solution-centric strategy, a vision AI platform, and deep learning expertise, Plainsight empowers enterprises to realize the full value of their visual data with transformative computer vision solutions. Schedule a call today to learn more about how our models enable process improvement, innovation, and more.
