This heightened level of civic participation, while a success for democratic engagement, has presented federal agencies with a significant challenge: the "volume problem."
Agencies now routinely face the task of effectively handling tens of thousands, hundreds of thousands, or even millions of public comments on regulatory and policy matters.
This growth strains agency operations on several fronts: resource allocation, rulemaking timeliness, the ability to identify substantive input, and public trust.
The sheer number of submissions places a significant strain on agency resources, including the time of personnel dedicated to regulatory review, the budgetary allocations required for processing, and the technological infrastructure needed to manage and analyze vast quantities of data. A single high-profile regulatory proposal can now generate an unprecedented level of public response, sometimes reaching into the millions of individual comments. This surge demands considerable allocation of resources that might otherwise be directed towards other critical agency functions.
The traditional processes of manual review become increasingly inefficient and unsustainable when confronted with such high volumes. The time required to read, categorize, and analyze each comment manually can lead to significant delays in the rulemaking process, hindering an agency's ability to respond promptly to pressing public needs and evolving policy challenges. This bottleneck in the review phase can slow down the entire regulatory lifecycle, potentially diminishing the agency's agility and overall effectiveness.
Ensuring that each unique and substantive comment receives meaningful consideration becomes a formidable task when dealing with millions of submissions. Dockets are often complicated by the presence of duplicate comments, numerous variations of form letters generated by advocacy campaigns, and attachments that can be difficult to process. Agencies currently grapple with the challenge of sifting through these mass-submitted, often duplicative comments, which may lack substantive contributions, to identify the truly valuable insights, data, and perspectives offered by individual commenters. The "noise" generated by non-substantive comments can obscure the crucial feedback that could significantly improve the quality of regulations.
The perception of inadequate review, stemming from the overwhelming volume of comments, can also erode public trust in the regulatory process and undermine the legitimacy of agency decisions. Public engagement is intended to foster a sense of partnership and shared governance; however, if the public believes their input is not being heard or meaningfully considered due to the sheer volume, their willingness to participate in future rulemaking efforts may suffer.
Federal agencies employ a range of methodologies and technologies to manage and analyze the increasing volume of public comments they receive. Regulations.gov serves as the principal platform for federal agencies to solicit public input on proposed regulations and to store the associated background information. Established in 2003, it has become a centralized hub where the public can find, read, and comment on regulatory issues. A significant majority of federal agencies utilize Regulations.gov as their primary electronic comment platform. While this platform provides a centralized system for public access and comment submission, it has faced criticism regarding the reliability of its search function and overall usability, both for the public trying to navigate dockets and for agency staff seeking to analyze comments.
Alongside this digital infrastructure, the traditional approach of manual review by agency staff remains a common practice. This method allows for a deep understanding of complex issues, nuanced perspectives, and the specific context of individual comments. However, with the exponential increase in comment volumes, manual review alone is no longer scalable or efficient enough to handle the workload within reasonable timeframes and resource constraints. The human capacity to thoroughly read and analyze millions of comments is inherently limited.
To augment manual efforts, many agencies utilize automated content analysis tools. These software solutions can assist with tasks such as categorizing comments based on keywords, identifying recurring themes, and flagging duplicate submissions. The Administrative Conference of the United States (ACUS) has even recommended that agencies explore the use of comment analysis software to identify duplicative and inappropriate comments.
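As a rough illustration of what such tooling does at its simplest, the sketch below pairs keyword-based categorization with exact-duplicate flagging. The category names and keyword lists are invented for the example; real comment-analysis software would use tuned, docket-specific configurations.

```python
import hashlib
import re
from collections import defaultdict

# Hypothetical category keywords -- a real deployment tunes these per docket.
CATEGORIES = {
    "costs": {"cost", "burden", "expense", "fee"},
    "health": {"health", "safety", "exposure"},
    "process": {"deadline", "extension", "schedule"},
}

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so trivially edited copies hash alike."""
    return re.sub(r"\s+", " ", text.lower()).strip()

def categorize(text: str) -> list[str]:
    """Return every category whose keyword set overlaps the comment's words."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return [name for name, kws in CATEGORIES.items() if words & kws]

def flag_duplicates(comments: list[str]) -> dict[str, list[int]]:
    """Group comment indices by the hash of their normalized text; return
    only groups with more than one member (i.e., duplicates)."""
    groups = defaultdict(list)
    for i, comment in enumerate(comments):
        digest = hashlib.sha256(normalize(comment).encode()).hexdigest()
        groups[digest].append(i)
    return {h: idxs for h, idxs in groups.items() if len(idxs) > 1}
```

Exact hashing only catches verbatim copies; campaign form letters with small individual edits require the near-duplicate techniques discussed later in this report.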
In some instances, federal agencies have developed or adopted their own agency-specific platforms and systems for managing public comments. For example, the Federal Communications Commission (FCC) utilizes its Electronic Comment Filing System (ECFS). User groups have generally found such agency-specific platforms easy to access and use for submitting comments. While these tailored systems can be designed to meet the unique requirements and workflows of individual agencies, they may also lead to inconsistencies in data sharing and interoperability across the broader federal government.
| Methodology | Key Features | Strengths | Weaknesses | Examples of Agencies Using It (if known) |
| --- | --- | --- | --- | --- |
| Regulations.gov | Centralized platform for submission and storage of comments | Enhances transparency, provides public access | Criticized for search functionality, usability issues, potential data inconsistencies | EPA, FCC, HHS, many others |
| Manual Review | Direct reading and analysis of comments by agency staff | Offers in-depth understanding of nuance and context | Not scalable for high volumes, resource-intensive, prone to human error and bias | Across many federal agencies |
| Automated Content Analysis | Software for categorization, keyword analysis, duplicate identification | Rapid processing of large volumes, improves efficiency in initial filtering | May lack understanding of complex language, requires careful configuration and validation | EPA, FCC, HHS, others |
| Agency-Specific Platforms | Tailored systems for individual agency needs | Can be optimized for specific agency workflows | May lead to data silos, hinder interagency collaboration, potential inconsistencies in standards | FCC (ECFS), SEC |
The current methods employed by federal agencies for managing public comments each present a unique set of strengths and weaknesses. Manual review, while limited in its capacity to handle millions of submissions, offers the crucial advantage of in-depth understanding of complex issues, nuanced perspectives, and the specific context surrounding individual comments. This qualitative analysis can uncover valuable insights that automated systems might miss. Regulations.gov, as the primary government-wide platform, plays a vital role in enhancing transparency and providing a central point of access for the public to participate in the rulemaking process. Its widespread adoption ensures a degree of standardization in how comments are submitted and stored.
Automated content analysis tools offer significant benefits in terms of speed and scalability, enabling agencies to rapidly process vast quantities of comments, identify duplicate submissions, and perform initial categorization based on keywords. This can greatly improve efficiency in the early stages of comment review. Agency-specific platforms, when well-designed, can be tailored to meet the unique needs and workflows of individual agencies, potentially offering a more streamlined experience for both commenters and agency staff.
However, each of these approaches also has notable limitations. Manual review is simply not scalable to handle the millions of comments that some rulemakings generate, leading to significant delays and requiring substantial resource allocation. It is also susceptible to human error and potential biases in interpretation. Regulations.gov has been criticized for its limitations in search functionality, which can make it challenging for both the public and agency staff to locate specific information within large dockets. Usability issues can also deter public participation.
Popular automated tools, while efficient for initial processing, may struggle to understand complex language, sarcasm, or nuanced arguments, potentially leading to misinterpretation of sentiment or context. The accuracy of these tools is also heavily dependent on careful configuration and validation.
Agency-specific platforms, while potentially optimized for individual needs, can contribute to data silos, hinder interagency collaboration, and may not always adhere to government-wide standards for accessibility and transparency.
The current landscape of comment processing often involves a combination of these approaches, reflecting the need for agencies to balance the desire for thorough review with the practical constraints imposed by high comment volumes. The effectiveness of any particular method or combination thereof is heavily contingent on the specific context of the rulemaking, the sheer volume and nature of the comments received, and the resources that the agency can allocate to this critical task.
To address the challenges posed by the increasing volume of public comments, federal agencies are exploring innovative strategies and emerging technologies that can significantly enhance the effectiveness of comment processing.
Advanced Natural Language Processing (NLP) holds immense potential for sophisticated text analysis. NLP techniques can be employed for topic modeling to identify the key themes and subjects within a large corpus of comments, sentiment analysis to gauge public opinion and emotional responses, entity recognition to pinpoint specific individuals, organizations, or locations mentioned, and summarization to condense lengthy comments into key points.
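To make one of these techniques concrete, the sketch below shows lexicon-based sentiment scoring in its most reduced form. The word lists are illustrative assumptions; production systems rely on trained models or validated lexicons rather than hand-picked vocabularies.

```python
import re
from collections import Counter

# Tiny illustrative lexicons -- real systems use trained models or validated
# sentiment lexicons, not hand-picked word lists like these.
POSITIVE = {"support", "benefit", "improve", "agree", "helpful"}
NEGATIVE = {"oppose", "harm", "burden", "unfair", "object"}

def sentiment(comment: str) -> str:
    """Classify a comment as positive/negative/neutral by counting lexicon hits."""
    words = Counter(re.findall(r"[a-z]+", comment.lower()))
    pos = sum(words[w] for w in POSITIVE)
    neg = sum(words[w] for w in NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"
```

Even this toy version surfaces the core limitation noted elsewhere in this report: word counting cannot detect sarcasm, negation ("I do not support..."), or nuanced argument, which is why human validation remains essential.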
Pilot programs, such as the one conducted by the Chief Data Officers (CDO) Council, have demonstrated the effectiveness of NLP in aiding the regulatory comment analysis process. The Department of Health and Human Services (HHS) has also tested NLP to process public comments on proposed regulations, achieving significant cost savings and boosting personnel satisfaction.
Machine Learning (ML) algorithms offer another powerful avenue for enhancing comment analysis. ML can be used for topic modeling and sentiment analysis, going beyond basic keyword identification to learn patterns and trends in public comments, providing deeper insights into public opinion. By training ML models on labeled data, agencies can develop systems that automatically categorize comments based on their underlying meaning and assess the nuances of expressed sentiment.
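The "training on labeled data" step can be sketched with a minimal multinomial naive Bayes classifier. This is an illustration of the general approach, not any agency's actual system, and the labels and training comments are invented for the example; a real deployment would use a vetted ML library and far more labeled data.

```python
import math
import re
from collections import Counter, defaultdict

class NaiveBayesComments:
    """Minimal multinomial naive Bayes for comment categorization (illustrative)."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter()            # label -> training examples
        self.vocab = set()

    @staticmethod
    def tokens(text):
        return re.findall(r"[a-z]+", text.lower())

    def train(self, examples):
        """examples: iterable of (comment_text, label) pairs."""
        for text, label in examples:
            toks = self.tokens(text)
            self.word_counts[label].update(toks)
            self.label_counts[label] += 1
            self.vocab.update(toks)

    def predict(self, text):
        """Return the label maximizing log prior + Laplace-smoothed log likelihood."""
        best, best_score = None, -math.inf
        total = sum(self.label_counts.values())
        for label in self.label_counts:
            score = math.log(self.label_counts[label] / total)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for tok in self.tokens(text):
                score += math.log((self.word_counts[label][tok] + 1) / denom)
            if score > best_score:
                best, best_score = label, score
        return best
```

Unlike keyword matching, the trained model can categorize a comment that uses none of the exact words an analyst anticipated, because it learns word-label associations from the examples themselves.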
The emergence of Generative AI (GenAI), including Large Language Models (LLMs) like GPT, presents exciting possibilities for transforming public comment processing. These models can assist with summarizing complex comments, drafting initial responses to common queries, and identifying key arguments and supporting evidence within the submitted feedback. Federal agencies are actively testing the potential of GenAI to supplement the analysis and triage of public comments, aiming to improve efficiency and accuracy.
Beyond these specific technologies, platforms designed specifically for large-scale public consultation offer integrated features for comment submission, organization, analysis, and public reporting. These platforms often provide user-friendly interfaces for the public to submit comments, as well as analytical dashboards and tools for agencies to manage and understand the feedback they receive. Companies like SmartComment and DocketScope offer software solutions tailored to the needs of government agencies for streamlining the public comment process.
Several federal agencies have begun to successfully implement new strategies and technologies to manage the challenge of high volumes of public comments.
One notable example is HHS's use of NLP to analyze public comments submitted for a proposed rule aimed at improving the quality of the Head Start program. By utilizing a large language model, HHS was able to tag comments based on topics selected by policy experts, sentiment (positive, negative, neutral), and intent (e.g., question, suggestion). The system also generated first drafts of topic-based summaries, which were then reviewed and refined by subject matter experts. This initiative demonstrated the practical application of NLP in improving the efficiency of the rulemaking process and providing better baseline information for the development of the final rule.
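The tagging schema described above (topic, sentiment, intent) can be sketched with a simple rule-based stand-in for the LLM step. The topics, keyword lists, and intent cues below are illustrative assumptions, not HHS's actual configuration; they exist only to show the shape of the output that subject matter experts would then review.

```python
import re

# Illustrative stand-in for an LLM tagging step: topics, keywords, and intent
# cues are invented for this example, not drawn from any agency's system.
TOPIC_KEYWORDS = {
    "staffing": {"teacher", "staff", "wages", "salary"},
    "eligibility": {"income", "eligibility", "enrollment"},
}

def tag_comment(text: str) -> dict:
    """Tag one comment with topics, intent, and sentiment, mirroring the
    three-part schema (topic / sentiment / intent) described above."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    topics = [t for t, kws in TOPIC_KEYWORDS.items() if words & kws]
    if text.strip().endswith("?"):
        intent = "question"
    elif {"should", "recommend", "suggest"} & words:
        intent = "suggestion"
    else:
        intent = "statement"
    if {"support", "agree"} & words:
        sentiment = "positive"
    elif {"oppose", "concern"} & words:
        sentiment = "negative"
    else:
        sentiment = "neutral"
    return {"topics": topics, "intent": intent, "sentiment": sentiment}
```

The value of the HHS approach is precisely that an LLM replaces these brittle rules with contextual judgment, while the structured output keeps human experts in the review loop.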
The FCC's experience with managing the millions of comments received during the Net Neutrality rulemaking periods highlights the application of NLP in addressing the "volume problem." The FCC reportedly used NLP tools to cluster the massive influx of over 22 million comments and to identify potential instances of inauthentic or bot-generated submissions. This demonstrates the role of technology in helping agencies to manage the sheer scale of comments and to identify and filter out less substantive input.
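Clustering near-identical form letters, as in the FCC case, is commonly done with text-similarity techniques. The sketch below uses word shingling and Jaccard similarity with greedy single-pass clustering; it is a simplified illustration of the general idea, not the FCC's actual method, and the similarity threshold is an arbitrary assumption to tune per docket.

```python
import re

def shingles(text: str, k: int = 3) -> set:
    """Return the set of k-word shingles of the normalized comment text."""
    words = re.findall(r"[a-z]+", text.lower())
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two shingle sets (0.0 for two empty sets)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_form_letters(comments: list[str], threshold: float = 0.5) -> list[list[int]]:
    """Greedy single-pass clustering: attach each comment to the first
    cluster whose representative it resembles above the threshold."""
    clusters: list[list[int]] = []
    reps: list[set] = []  # shingle set of each cluster's first member
    for i, comment in enumerate(comments):
        sh = shingles(comment)
        for j, rep in enumerate(reps):
            if jaccard(sh, rep) >= threshold:
                clusters[j].append(i)
                break
        else:
            clusters.append([i])
            reps.append(sh)
    return clusters
```

At the scale of 22 million comments, pairwise comparison like this is too slow; production systems use locality-sensitive hashing (e.g., MinHash) to find candidate pairs efficiently, but the underlying similarity notion is the same.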
Designing public comment processes with efficiency and effectiveness in mind is crucial for federal agencies seeking to manage high volumes of feedback. One key best practice involves structuring comment requests in a way that encourages more focused and substantive input. Clearly defining the scope of the rulemaking, posing specific questions related to the proposed rule, and guiding commenters on the type of information that would be most helpful to the agency can lead to more relevant and analytical feedback, making the subsequent analysis process more manageable.
Optimizing the user experience on online platforms like Regulations.gov or agency-specific systems is another critical element. This includes ensuring clear and intuitive navigation, robust search functionality that allows users to easily find relevant documents and comments, and user-friendly comment forms that minimize errors and encourage detailed responses. Addressing frustrations with website accessibility and search capabilities, as highlighted in public feedback to the Office of Information and Regulatory Affairs (OIRA), should be a priority.
Agencies should also actively work to encourage commenters to provide focused and substantive input, moving beyond simple expressions of opinion or the submission of form letters. Clearly communicating the agency's need for detailed reasoning, supporting data, and analytical perspectives can incentivize commenters to provide more valuable feedback. Emphasizing that well-supported comments carry more weight than mere volume can also be beneficial.
Finally, adopting diverse engagement methods beyond just written comments can help agencies capture a wider range of perspectives. Supplementing written submissions with virtual meetings, webinars, online forums, and even social media discussions can reach a broader audience and facilitate more interactive and nuanced feedback. This multi-faceted approach can lead to a more comprehensive understanding of public sentiment and concerns.
The legal and regulatory landscape surrounding public comment periods, primarily governed by the Administrative Procedure Act (APA), plays a significant role in how federal agencies must approach the "volume problem." The APA mandates that agencies must consider and respond to all "relevant matter presented" during the notice and comment process. The sheer volume of comments received in modern rulemaking presents a considerable challenge for agencies to effectively demonstrate that they have adequately considered and responded to all significant and relevant input, potentially increasing the risk of legal challenges.
Recent Supreme Court decisions have further emphasized the need for agencies to demonstrate careful analysis and provide reasoned responses to the substantive information presented in public comments. This heightened scrutiny underscores the importance of not just receiving comments but also showing how that feedback has been thoroughly considered in the agency's final decision-making process. Agencies are now under greater pressure to articulate their rationale and to address significant concerns raised by the public.
The influx of mass comments, including form letters and potentially bot-generated submissions, adds another layer of complexity to an agency's ability to meet its legal obligations. While agencies must consider all relevant comments, they also face the challenge of distinguishing between genuine, substantive feedback and the overwhelming volume of less informative submissions. This distinction is crucial for ensuring that agency resources are focused on the input that can most effectively inform the rulemaking process.
To navigate these legal and regulatory considerations in the face of the "volume problem," agencies are exploring various technological and procedural solutions. The integration of AI and ML tools into the comment review process offers a promising avenue for expediting analysis and identifying key themes and arguments. Streamlined internal workflows and clear protocols for managing and responding to comments are also essential for ensuring both efficiency and compliance with the APA's requirements.
Federal agency leadership increasingly recognizes the vital role of public engagement in shaping effective and well-received regulations. They understand that actively seeking and considering input from the American people leads to more responsive and ultimately more effective government. Guidance issued by the Office of Management and Budget (OMB) emphasizes the benefits of public engagement for both government activities and the public, directing agencies to institutionalize principles of effective engagement.
Despite this recognition, agency leaders acknowledge the significant practical hurdles associated with managing the increasing volume of public comments. Common challenges highlighted by leadership include resource constraints, particularly in terms of personnel and budget, the sheer difficulty of analyzing vast amounts of data, and the persistent need to differentiate substantive comments from the large number of form letters and less detailed submissions. Federal agencies often report lacking the necessary time, funding, staffing, or training to adequately address the complexities of public participation in the regulatory process.
In response to these challenges, agency leadership is increasingly prioritizing initiatives aimed at improving public comment management. This includes exploring investments in new technologies and developing updated procedural guidelines to streamline the review process. Central government bodies like OMB are also actively working to provide guidance and support to agencies in modernizing their public engagement and comment management practices, recognizing the government-wide nature of the "volume problem." The development of toolkits and the sharing of best practices are key components of this effort to enhance agencies' capabilities in this critical area.
Based on the analysis of the challenges, current methodologies, emerging technologies, and legal considerations surrounding the "volume problem," several key recommendations and insights emerge for federal agency leadership seeking to enhance their public comment management practices.
Investing in modern technologies, particularly advanced NLP, ML, and GenAI tools, is crucial for automating and significantly enhancing the analysis of public comments. While these technologies offer immense potential for efficiency and scale, it is imperative to maintain human oversight to ensure accuracy, address nuanced arguments, and uphold ethical considerations in their application.
Agencies should prioritize optimizing their public comment platforms, whether Regulations.gov or agency-specific systems, to improve usability and functionality. Enhancements should focus on making search capabilities more robust, providing clear and concise instructions for submitting comments, and ensuring user-friendly interfaces that encourage participation from a diverse range of individuals.
Refining comment solicitation strategies can lead to more effective and manageable feedback. Agencies should strive to structure their comment requests more thoughtfully by asking specific, targeted questions and clearly guiding commenters on the type of information that would be most valuable in informing the agency's decision-making process.
Adopting a multi-channel approach to public engagement, which extends beyond traditional written comments to include virtual meetings, webinars, and online forums, can help agencies reach a wider and more diverse audience, gathering richer and more nuanced feedback.
Developing clear and well-defined internal processes for managing, analyzing, and responding to public comments is essential for ensuring both efficiency and compliance with all applicable legal and regulatory requirements. These processes should outline clear roles and responsibilities and establish protocols for each stage of the comment lifecycle.
Agencies should prioritize their resources on thoroughly analyzing comments that provide data, evidence, and reasoned arguments, while also developing effective strategies for managing the volume of form letters and other non-substantive submissions. This might involve using automated tools to identify and categorize different types of comments, allowing human reviewers to focus on the most impactful feedback.
Enhancing transparency and communication with the public about how their comments are being used in the rulemaking process is crucial for building trust and encouraging continued participation. Agencies should strive to provide feedback to the public on the outcomes of the consultation and clearly articulate how public input has influenced the final rule.
Finally, investing in adequate training for agency staff on the use of new technologies and the implementation of best practices for public comment analysis and engagement is vital for the successful adoption of these strategies. Equipping staff with the necessary skills and knowledge will empower them to effectively manage the challenges of the "volume problem."
AEM's AI team stands out for our expertise in realizing the benefits of human-in-the-loop approaches in deep learning systems, and we offer capabilities across a range of traditional ML areas. Contact us at ai@aemcorp.com to explore challenges your team is facing.