The AI Bill: A Critical Analysis


This is a guest essay by University of San Diego graduate student Adam Graves and not an official position of the University of San Diego.

The Artificial Intelligence Research, Innovation, and Accountability Act of 2023 (S.3312) was introduced in the U.S. Senate on November 15, 2023, by a bipartisan group of senators: John Thune (R-SD), Amy Klobuchar (D-MN), Roger Wicker (R-MS), John Hickenlooper (D-CO), Shelley Moore Capito (R-WV), and Ben Ray Luján (D-NM). The bill seeks to establish a comprehensive framework that fosters AI innovation while enhancing transparency, accountability, and security in the development and deployment of high-impact AI systems (Artificial Intelligence Research, Innovation, and Accountability Act, 2023).

Senator Amy Klobuchar emphasized the bill’s importance, stating, “The bill aims to establish a comprehensive framework to promote innovation in artificial intelligence (AI) while enhancing transparency, accountability, and security in the development and deployment of high-impact AI systems” (Klobuchar, 2023).

Title II of the Artificial Intelligence Research, Innovation, and Accountability Act of 2023, specifically Section 201, Item 2, provides a definition of an artificial intelligence system as a “machine-based system that, for explicit and implicit objectives, infers from the input the system receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments” (Artificial Intelligence Research, Innovation, and Accountability Act, 2023). This recent update in the bill is a positive sign that lawmakers are actively refining the legislation as AI technology evolves and more insights are gained. However, the definition does not address a critical aspect of AI autonomy—cases where the system generates and processes its own input data. This is a significant oversight, as it raises concerns about what kind of data an AI system might produce and how it could be used. If AI models are left unchecked in their ability to create and feed their own data into subsequent decision-making processes, the risk of misinformation, bias reinforcement, or unintended harmful behaviors increases. Ensuring proper monitoring mechanisms for AI-generated input data should be a key regulatory priority to maintain accountability and transparency in AI systems.
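To make the concern concrete, the sketch below illustrates one way a pipeline could tag the provenance of incoming data and refuse to feed unreviewed, model-generated records back into inference. This is a minimal Python sketch under assumed names; the Provenance enum, InputRecord, and admit_for_inference function are hypothetical illustrations, not anything prescribed by the bill.

```python
from dataclasses import dataclass
from enum import Enum


class Provenance(Enum):
    HUMAN_SUPPLIED = "human_supplied"      # data collected or entered by people
    MODEL_GENERATED = "model_generated"    # data produced by the AI system itself


@dataclass
class InputRecord:
    payload: str
    provenance: Provenance


def admit_for_inference(record: InputRecord, reviewed: bool = False) -> bool:
    """Gate inputs before they reach the model.

    Human-supplied data passes through; model-generated data is admitted
    only after an explicit review, and every decision is printed so an
    auditor can reconstruct what the model actually consumed.
    """
    if record.provenance is Provenance.MODEL_GENERATED and not reviewed:
        print(f"BLOCKED (unreviewed self-generated input): {record.payload!r}")
        return False
    print(f"ADMITTED ({record.provenance.value}): {record.payload!r}")
    return True


if __name__ == "__main__":
    admit_for_inference(InputRecord("user survey response", Provenance.HUMAN_SUPPLIED))
    admit_for_inference(InputRecord("summary generated by the model itself", Provenance.MODEL_GENERATED))
```

A provenance gate of this kind is deliberately simple; the point is that regulation could require some auditable record of which inputs a system produced for itself.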

This concern extends further with the rise of self-coding AI platforms, which present unprecedented regulatory challenges that demand immediate attention. These systems, capable of generating and modifying their own code, introduce several critical risks that are not yet adequately addressed in the current legislative frameworks:

  • Unpredictable Evolution – Self-coding AI systems can develop functionalities or behaviors that their original developers never anticipated, potentially creating new risks that fall outside existing regulations.
  • Cascade Effects – Changes made by an AI in one area of its system could trigger unintended modifications elsewhere, potentially bypassing established safety protocols or introducing unforeseen vulnerabilities.
  • Accountability Gaps – When AI systems autonomously modify their own code, traditional accountability mechanisms become inadequate: the boundary between human-written and machine-written code blurs, making it increasingly difficult to trace responsibility for errors, biases, or unintended consequences.
  • Autonomous Deployment Risks – Allowing AI systems to push code updates and system changes without human supervision poses significant risks, as errors or unintended modifications could propagate rapidly, potentially leading to catastrophic failures in critical applications (a minimal illustration of a human-approval gate follows this list).
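One way to mitigate the last risk is a deployment gate that refuses to apply any AI-authored change without a recorded human sign-off. The following is a minimal Python sketch under assumed names; ProposedChange, DeploymentGate, and the reviewer workflow are illustrative, not drawn from the bill or from any specific platform.

```python
import hashlib
from dataclasses import dataclass


@dataclass
class ProposedChange:
    """A code modification proposed by an AI system, pending human sign-off."""
    description: str
    diff: str

    @property
    def fingerprint(self) -> str:
        # Hash the diff so an approved change cannot be silently swapped out later.
        return hashlib.sha256(self.diff.encode()).hexdigest()


class DeploymentGate:
    """Refuses to deploy any AI-authored change that lacks a recorded human approval."""

    def __init__(self) -> None:
        self._approvals: dict[str, str] = {}  # diff fingerprint -> reviewer

    def approve(self, change: ProposedChange, reviewer: str) -> None:
        self._approvals[change.fingerprint] = reviewer

    def deploy(self, change: ProposedChange) -> bool:
        reviewer = self._approvals.get(change.fingerprint)
        if reviewer is None:
            print(f"REFUSED: '{change.description}' has no human approval on record")
            return False
        print(f"DEPLOYED: '{change.description}' (approved by {reviewer})")
        return True


if __name__ == "__main__":
    gate = DeploymentGate()
    change = ProposedChange("shorten retry delay", "- sleep(5)\n+ sleep(1)")
    gate.deploy(change)             # refused: no sign-off yet
    gate.approve(change, "j.doe")
    gate.deploy(change)             # allowed only after explicit human approval
```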

The AI Bill has taken positive steps in refining definitions and improving regulatory oversight, but it must also evolve to address the risks posed by AI systems that generate and manipulate their own data and code. Without stringent monitoring mechanisms, these systems could operate beyond human comprehension and control, raising serious concerns about security, accountability, and long-term stability in AI-driven environments. Ensuring clear guidelines, oversight structures, and intervention protocols will be crucial in safeguarding against these emerging risks.

The Overlooked Role of DataOps in AI Accountability

A critical omission in the Artificial Intelligence Research, Innovation, and Accountability Act of 2023 is its failure to address DataOps, the discipline that governs the end-to-end lifecycle of data within AI systems. For a practical overview of the discipline, see "DataOps Explained: How To Not Screw It Up" (Willis, 2023). AI models rely on vast amounts of structured and unstructured data, but the bill provides no comprehensive framework for ensuring data quality, bias mitigation, or secure handling of these datasets. DataOps applies data pipeline automation, continuous monitoring, and governance to improve the efficiency, reliability, and transparency of AI-driven decision-making (DataOps.live, 2023). Without a strong DataOps framework, AI models risk being trained on incomplete, biased, or misleading datasets, which could skew outcomes and reinforce systemic biases.
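As a concrete illustration of the kind of automated gate a DataOps pipeline could run before training, the sketch below checks a dataset for missing columns, excessive missing values, and duplicate rows using pandas. The column names, thresholds, and function name are assumptions made for the example, not requirements from the bill or from any DataOps standard.

```python
import pandas as pd


def validate_training_data(df: pd.DataFrame,
                           required_columns: list[str],
                           max_missing_fraction: float = 0.05) -> list[str]:
    """Run basic quality gates on a training dataset before it reaches a model.

    Returns a list of human-readable issues; an empty list means the
    dataset passed these (deliberately simple) checks.
    """
    issues = []
    for col in required_columns:
        if col not in df.columns:
            issues.append(f"missing required column: {col}")
            continue
        frac_missing = df[col].isna().mean()
        if frac_missing > max_missing_fraction:
            issues.append(f"{col}: {frac_missing:.1%} missing values exceeds threshold")
    if df.duplicated().any():
        issues.append(f"{int(df.duplicated().sum())} duplicate rows detected")
    return issues


if __name__ == "__main__":
    sample = pd.DataFrame({"age": [34, None, 51], "income": [52000, 61000, 61000]})
    for problem in validate_training_data(sample, ["age", "income", "zip_code"]):
        print("DATA QUALITY ISSUE:", problem)
```

Checks like these would not eliminate bias on their own, but they make data problems visible and auditable before a model is trained.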

The bill’s treatment of data, and of data bias in particular, is inadequate: the term “bias” appeared only once in the Risk Management section (SEC. 206, 4.b.1.b) of the original version and has been removed from the latest version. Given the profound impact of biased datasets on AI decision-making, a robust regulatory framework should mandate comprehensive documentation and verification processes for each AI model, ensuring data integrity and fairness (a minimal sketch of such a documentation record follows the list). This should include:

  1. Data Source Identification – Clearly documenting the origin of training datasets to trace potential sources of bias.
  2. Bias Evaluation and Documentation – Requiring AI developers to quantify and report the level of bias detected in the dataset before model deployment.
  3. Dataset Size Specifications – Ensuring AI models are trained on sufficiently large and representative datasets to reduce overfitting and bias.
  4. Security Classification – Establishing data classification standards to protect sensitive information and prevent misuse.
  5. Model Processing Time Metrics – Documenting the computational time and resources required for training and inference, ensuring efficiency and scalability.
  6. Output Score Documentation – Implementing standardized scoring metrics for AI-generated outputs to assess reliability, accuracy, and bias levels.
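The sketch below shows one possible shape for such a per-model documentation record; the field names and the ModelDataSheet structure are assumptions made for illustration, not terminology from the bill.

```python
from dataclasses import dataclass, asdict
import json


@dataclass
class ModelDataSheet:
    """One possible shape for the per-model documentation proposed above."""
    data_sources: list[str]          # 1. origin of each training dataset
    bias_findings: str               # 2. summary of measured bias before deployment
    dataset_rows: int                # 3. size of the training data
    security_classification: str     # 4. e.g. "public", "internal", "restricted"
    training_hours: float            # 5. compute time spent on training
    output_quality_score: float      # 6. standardized score for output reliability


if __name__ == "__main__":
    sheet = ModelDataSheet(
        data_sources=["2022 customer survey export", "public census extract"],
        bias_findings="approval-rate gap of 4.2 points between age groups",
        dataset_rows=183_000,
        security_classification="internal",
        training_hours=12.5,
        output_quality_score=0.91,
    )
    print(json.dumps(asdict(sheet), indent=2))
```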

To further enhance transparency, every AI model developed should include a bias score, indicating the extent and nature of detected biases within the processed dataset. This would provide greater accountability and allow regulators, developers, and end-users to make informed decisions about AI adoption and deployment. Without explicit DataOps regulations, AI models may be trained on unsupervised and potentially flawed datasets, yielding faulty models and unreliable predictions.
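As one illustration of how a bias score could be computed, the sketch below uses a simple demographic parity gap: the difference between the highest and lowest positive-outcome rates across groups. This is only one of many possible fairness metrics, and the column names and toy data are invented for the example.

```python
import pandas as pd


def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Return the gap between the highest and lowest positive-outcome rates
    across groups. 0.0 means identical rates; larger values mean more disparity.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())


if __name__ == "__main__":
    decisions = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B"],
        "approved": [1,   1,   0,   1,   0,   0],
    })
    score = demographic_parity_gap(decisions, "group", "approved")
    print(f"bias score (demographic parity gap): {score:.2f}")  # 0.33 for this toy data
```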

Regulatory Approach and the Role of Government

A key concern in AI regulation is the extent to which the government should intervene in shaping AI services. While some level of oversight is necessary to ensure ethical AI development, excessive government control risks stifling innovation. Many legislators see AI regulation as an opportunity to expand governmental authority, yet history has shown that overregulation can cripple progress rather than facilitate responsible development.

The debate over AI governance has sparked contrasting viewpoints on the balance between regulation and innovation. Senator Ted Cruz has highlighted the risks of heavy-handed AI regulation: as one legislative analysis put it, “Committee Ranking Member Ted Cruz contrasted what he characterized as the EU’s overly bureaucratic approach to AI regulation with what he said has traditionally been a more entrepreneurial, American approach” (Samp et al., 2024). This raises an important question: should the government focus on how to regulate AI, or should it first determine what actually requires regulation? A measured approach is essential: regulate only where necessary while fostering an environment that encourages technological advancement and ethical responsibility.

As in AI processing itself, patterns emerge in legislative proposals: many reflect bias (personal or political) and a lack of technical expertise on the part of the policymakers promoting them. One example that merits closer scrutiny is Senator John Hickenlooper’s introduction of the Validation and Evaluation for Trustworthy (VET) Artificial Intelligence Act (S.4769) (Congress.gov, 2024). While the act appears well intended, Sen. Hickenlooper lacks a background in technology or AI services, raising concerns about whether he is equipped to craft effective AI regulations. Furthermore, the bill grants significant regulatory power to the National Institute of Standards and Technology (NIST), an agency headquartered in Maryland but with a significant presence in Colorado, the senator’s home state. This raises valid questions about the motivations behind this allocation of power and whether it reflects a genuine regulatory need or political maneuvering.

Not all regulatory efforts have been misguided. California has taken a more targeted approach with SB 942, the California AI Transparency Act, which was signed into law in September 2024 (California Legislative Information, 2024). This law focuses on transparency in generative AI systems, requiring providers with over one million monthly users to:

  • Offer AI content detection tools that are publicly accessible.
  • Provide users with an option to include visible watermarks on AI-generated content.
  • Automatically add metadata disclosures to AI-generated content (a minimal illustration of such a disclosure follows below).

Violations of this law will carry civil penalties of $5,000 per infraction, enforceable by the Attorney General, city attorneys, or county counsels, with the regulation taking effect on January 1, 2026.
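On the metadata disclosure requirement, the sketch below shows one way a provider might attach a machine-readable provenance record to generated content. SB 942 does not prescribe this exact schema; the field names and the attach_ai_disclosure function are assumptions made for illustration.

```python
import json
from datetime import datetime, timezone


def attach_ai_disclosure(content: str, provider: str, system_name: str) -> dict:
    """Wrap generated content with a machine-readable provenance disclosure.

    The schema below is illustrative only; the statute leaves the exact
    metadata format to implementing providers and standards bodies.
    """
    return {
        "content": content,
        "disclosure": {
            "ai_generated": True,
            "provider": provider,
            "system": system_name,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }


if __name__ == "__main__":
    record = attach_ai_disclosure("A watercolor skyline at dusk.", "ExampleAI Inc.", "image-gen-v2")
    print(json.dumps(record, indent=2))
```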

This targeted, structured approach serves as a model for how AI regulation should be framed—addressing specific risks while avoiding unnecessary government overreach. The future of AI governance should strike a balance between ensuring accountability and preserving the spirit of innovation that drives technological progress.

Recommendations for Effective Regulation

To create a meaningful and effective regulatory framework, governing bodies should:

  • Establish clear terminology and definitions for the AI services being regulated
  • Define the development processes for these services, distinguishing work done by human developers from work done by the machine itself
  • Identify the resources required across this environment
  • Define best practices for AI-related coding
  • Identify and define a genuine risk-management policy
  • Based on the above, create a “do not do” checklist
  • Define the control sets used to monitor and enforce the regulations
  • Define clear penalties for non-compliant services
  • Commission a study specifically on the meaning and implications of self-coding
  • Based on that study, create best practices for self-coding and maintain an ongoing knowledge-base library on the subject

Conclusion

The current regulatory approach, characterized by multiple government stakeholders and competing agendas, risks creating a fragmented and ineffective framework. A more effective approach would involve greater input from AI industry practitioners in formulating regulations, with industry partnering with governmental bodies to build supervisory and enforcement capacity.

The particular challenges posed by self-coding platforms demand special attention within any regulatory framework. These systems represent a paradigm shift in AI development, where the traditional boundaries between developer and code become increasingly blurred. Future regulations must specifically address the unique risks and challenges these systems present while maintaining the balance between innovation and safety.

References

Samp, T., Tobey, D., Phillips, S., Fleischman, M., & Loud, T. (2024, August 2). Major AI legislation advances in Senate: Key points. DLA Piper.

Validation and Evaluation for Trustworthy (VET) Artificial Intelligence Act, S. 4769, 118th Cong. (2024). https://www.congress.gov/bill/118th-congress/senate-bill/4769/text

California AI Transparency Act, SB 942, 2023-2024 Reg. Sess. (Cal. 2024). https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB942

Klobuchar, A. (2023). Klobuchar, Thune, Wicker, Hickenlooper, Capito, Luján introduce bipartisan legislation to promote AI innovation, accountability [Press release].

Willis, G. (2023, January 10). DataOps explained: How to not screw it up. Towards Data Science.

DataOps.live. (2023). What is DataOps? https://www.dataops.live/what-is-dataops
