The Benefits of AI for Local Governments (Virginia Local Government Management Association, VLGMA)
We will continue to apply what we learn to drive our own use of AI to fill data and research gaps across these same domains. Government agencies need to carefully consider the ethical implications of using generative AI in different contexts, and should collaborate with subject matter experts, including lawyers, ethicists, and social scientists, to identify and mitigate its risks. Whereas other emerging technologies often require significant technical training and staff resources, generative AI tools based on natural language inputs allow cities of all sizes to benefit from their value.
Two of the cities involved (Tampere and Turku) carried out well-being surveys among their student populations and, based on the results, clustered the students into groups needing different kinds of support. The study found that factors such as the reliability of public transportation and the quality of the natural environment play an important role in students’ well-being. Migrants and refugees are by definition in a rather precarious situation, and the decisions they are subject to are complex, highly discretionary, and not easily reduced to a binary option. Experience with predictive analytics in similar high-stakes contexts, such as policing, suggests that technological solutions are subject to the very same biases and errors as human decision-making, while amplifying systemic injustices and automating inequalities, and offering suboptimal appeal routes and transparency standards. A disadvantage of AI in creativity is the potential lack of originality and authenticity in AI-generated creative works.
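The clustering step described above can be sketched with a minimal one-dimensional k-means over synthetic well-being scores. The data, the 0-10 scale, and the group count below are invented for illustration and are not taken from the Tampere or Turku surveys:

```python
import random

def kmeans_1d(scores, k=3, iters=100, seed=0):
    """Cluster one-dimensional survey scores into k groups (toy Lloyd's algorithm)."""
    rng = random.Random(seed)
    centers = rng.sample(scores, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for s in scores:
            nearest = min(range(k), key=lambda j: abs(s - centers[j]))
            groups[nearest].append(s)
        # Recompute each center as the mean of its members (keep old center if empty)
        centers = [sum(g) / len(g) if g else centers[i] for i, g in enumerate(groups)]
    # Pair each center with its members, ordered from lowest- to highest-scoring group
    return sorted(zip(centers, groups))

# Invented well-being scores on a 0-10 scale
scores = [2.1, 2.4, 2.8, 5.0, 5.3, 5.6, 8.2, 8.5, 8.9]
clusters = kmeans_1d(scores, k=3)
# The lowest-scoring cluster would flag the students most in need of support
```

Real projects would cluster on many survey dimensions at once, but the principle, grouping respondents by similarity and targeting support at the most at-risk group, is the same.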
Benefits of Artificial Intelligence for Government
However, AI applications also present significant opportunities to close equity gaps across domains such as health care, housing, criminal justice, and public benefit administration. As government agencies balance innovation and responsibility in serving the public interest, generative AI will require agency leaders to confront a new set of dynamics and begin recalibrating several strategic decisions in the near term, the study concludes. Among other conclusions, the study recommends that agency executives prepare for a faster pace of change and establish flexible governance policies that can evolve alongside AI applications. The findings also suggest a deeper story about the need for training, according to Steve Faehl, federal security chief technology officer at Microsoft, who previewed the results. Artificial intelligence (AI) is changing the landscape of work, and local governments are by no means exempt.
This paper serves in particular as an input to a landscaping exercise on AI governance and regulatory frameworks in the EU, and a comparison with countries considered vanguards in the field. It relies on an overview of existing legal and policy instruments in three selected OECD countries (Canada, Finland, and Poland), as well as matching case studies of the use of AI in the public sector in those countries. These two elements are complemented by a forward-looking analysis of the goals, drivers, barriers, and risks for the use of AI in the public sector. One advantage of AI in creativity is its ability to augment human creativity and provide new avenues for artistic expression.
Documenting datasets, specifying boundary conditions for AI usage, relying on AI techniques that are more explainable, and deconstructing how a model arrived at a decision (explainability) are some approaches to increasing transparency and trust. But AI also carries risks, and as a result it requires legal work that is out of the ordinary. Two reasons motivate this paradigm shift: on the one hand, the difference between the speed of AI development and the speed at which conventional legal texts adapt; on the other, the inefficiency of traditional regulations, from which the excesses of AI too often escape. The exponential growth of digital data combined with computing power has brought artificial intelligence into a new era, offering extremely favourable prospects for its development and implementation in many sectors of the economy and society. A by-design culture allows a company to treat ethics as a first-class citizen, not an afterthought. This should be as commonplace as climate-change awareness or fair-trade supply-chain agreements.
By creating AI robots that can perform perilous tasks on our behalf, we can overcome many of the dangerous limitations that humans face. They can be used effectively in almost any natural or man-made calamity, whether going to Mars, defusing a bomb, exploring the deepest regions of the oceans, or mining for coal and oil. An example of this is AI-powered recruitment systems that screen job applicants based on skills and qualifications rather than demographics.
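The recruitment example can be made concrete with a tiny sketch: score applicants only on an explicit whitelist of job-related fields, so demographic attributes never enter the calculation. The field names and weights below are invented for illustration:

```python
# Only whitelisted, job-related fields may influence the score (weights invented)
SKILL_FIELDS = {"years_experience": 2.0, "certifications": 1.5, "test_score": 0.5}

def screen(applicant: dict) -> float:
    """Score an applicant using only the whitelisted skill fields."""
    return sum(weight * applicant.get(field, 0)
               for field, weight in SKILL_FIELDS.items())

alice = {"name": "Alice", "age": 52, "years_experience": 10,
         "certifications": 2, "test_score": 80}
bob = {"name": "Bob", "age": 28, "years_experience": 3,
       "certifications": 1, "test_score": 90}
# Demographic fields such as "age" are simply never read by screen()
```

The design choice here is structural: bias is excluded by never reading demographic fields at all, rather than by trying to correct for them after the fact. Note that real systems must also guard against proxy variables that correlate with demographics.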
- This data-centric approach empowers government officials to make evidence-based choices, leading to more effective outcomes and improved public service.
- This is why it is important to carefully consider the data that is used to train AI algorithms.
- AI-driven data analysis allows government officials to analyze complex data sets quickly and efficiently.
- To increase the transparency and explainability of Deep Learning systems, technical and non-technical solutions are emerging.
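As a concrete illustration of such solutions, permutation importance is one widely used, model-agnostic explainability technique: shuffle one input feature across samples and measure how much the model's accuracy drops. The toy model and data below are invented:

```python
import random

def model(x):
    """Toy 'decision' model: approve (1) when feature 0 outweighs feature 1."""
    return 1 if 2 * x[0] - x[1] > 0 else 0

def accuracy(X, y, predict):
    return sum(predict(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(X, y, predict, feature, seed=0):
    """Accuracy drop after shuffling one feature's values across samples."""
    rng = random.Random(seed)
    col = [x[feature] for x in X]
    rng.shuffle(col)
    X_shuffled = [list(x) for x in X]
    for row, value in zip(X_shuffled, col):
        row[feature] = value
    return accuracy(X, y, predict) - accuracy(X_shuffled, y, predict)

rng = random.Random(1)
X = [[rng.random(), rng.random()] for _ in range(500)]
y = [model(x) for x in X]  # labels produced by the model itself

imp0 = permutation_importance(X, y, model, feature=0)
imp1 = permutation_importance(X, y, model, feature=1)
# Feature 0 carries twice the weight, so shuffling it should hurt accuracy more
```

Because it treats the model as a black box, the same procedure works for deep networks, where inspecting weights directly explains little.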
For one thing, he noted, as the work of MIT economist Martin Beraja shows, China has been exporting AI surveillance technologies to governments in many developing countries. For another, countries that have made overall economic progress by employing more of their citizens in low-wage industries might find labor force participation undercut by AI developments. Before releasing A.I. systems for the public to test, we need to make sure that they are robust, and that they strengthen democracy rather than undermine it. Companies employ teams to interact with early versions of their software to teach them which outputs are useful and which are not. These paid users train the models to align with corporate interests, with applications like web search (integrating commercial advertisements) and business productivity software in mind.
Organisations that can operate ethical, secure, and trustworthy AI will be more appealing to the 21st-century conscious consumer. Moreover, since the existing portfolio of regulations for safety-critical or sensitive areas already applies, artificial intelligence does not need more or different regulation than classical software. What AI does require is the same diligent definition of functional requirements and verification of how well those requirements are satisfied before a product is released to its users. The Urban Institute is a leading source of equity measurement and analysis for decisionmakers who want to understand which policies are working, and for whom.
AuroraAI is conceived as a platform or service network in which the public operator sets specific technological and process requirements, as well as ethical boundaries, allowing anyone to develop a value proposition within the platform. The rationale for the project stems from the growing sustainability gap in public finances, the deteriorating dependency ratio, and a hope that new, personalized service chains will cater better to the changing realities of the 21st century. The introduction of digital technologies in general, and in the public sector in particular, is often portrayed as beneficial to end users. Yet are the processes happening under the banners of ‘democratization’, ‘convenience’ and ‘choice’ serving their advertised purposes? Or are they disguised attempts to strengthen the grip of control over citizens? In other words, is AI facilitating a power shift between the public sector and citizens, or merely entrenching the existing distribution of power?
Additionally, only 62 companies hold more than one contract, while 245 hold just one. Only one vendor has more than 10 contracts (AI Solutions), and only two others have more than five (AI Signal Research and AI Biosciences). “The AI understands an unstructured query, and it understands unstructured data,” Mason explained. For example, autonomous vehicle companies could use the reams of data they are collecting to identify new revenue streams related to insurance, while an insurance company could apply AI to its vast data stores to get into fleet management. Executives can use AI for business model expansion, experts said, noting that organizations are seeing new opportunities as they deploy data, analytics, and intelligence across the enterprise. The article says that noncitizens were flagged as being at higher risk for fraud, but does not say whether there was actually a correlation between noncitizens and fraud.
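Tallies like the vendor-contract counts above fall out of a simple frequency analysis over contract records. The records below are invented to echo the figures in the text:

```python
from collections import Counter

# Invented contract records (vendor, contract_id) echoing the counts above
contracts = (
    [("AI Solutions", f"C{i}") for i in range(11)]
    + [("AI Signal Research", f"S{i}") for i in range(6)]
    + [("Acme Analytics", "A1"), ("Acme Analytics", "A2"),
       ("Single Deal Co", "D1")]
)

per_vendor = Counter(vendor for vendor, _ in contracts)
more_than_one = [v for v, n in per_vendor.items() if n > 1]
more_than_ten = [v for v, n in per_vendor.items() if n > 10]
```

The same pattern scales to real procurement datasets: one pass to count, then threshold queries over the counts.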
- While the prevention and recovery of unlawful benefit payouts is showing promise in the Department for Work and Pensions, many other AI applications are in the works, including the automatic processing of 42% of documentation submitted with benefits claims.
- Even a country with strong regulations, the rule of law, and relatively accountable institutions can serve as a warning.
- Most obviously, LLMs could help us formulate and express our perspectives and policy positions, making political arguments more cogent and informed, whether in social media, letters to the editor, or comments to rule-making agencies in response to policy proposals.
- Through these instructions, the Director shall, as appropriate, expand agencies’ reporting on how they are managing risks from their AI use cases and update or replace the guidance originally established in section 5 of Executive Order 13960.
- While it should be up to individual governments to decide how to regulate AI, it is important that they also take a holistic approach, looking at all the different ways in which AI can be regulated.
These developments have to be assessed against the backdrop of wider market conditions affecting Poland, which has the fourth-largest pool of IT graduates in the EU (McKinsey, 2016) and the fourth-lowest Digital Economy and Society Index score in the EU (European Commission, 2018b). FDA general secretary Dave Penman described the pay offer as “unconscionable given the current economic climate that civil servants face”, saying it would “leave the civil service with the worst pay deal in the public sector by far”. The action came after the government publicised civil service pay remit guidance last week that would give most civil servants in the UK government a 4.5% increase, with an additional 0.5% uplift to ensure the lowest paid keep pace with the living wage.
Functionally, the implementation of AI across federal agencies may face challenges from the 49-year-old Privacy Act, under which federal agencies are prohibited from sharing data across organizations. Federal agencies appear to be embracing AI more rapidly than state agencies: a 2020 report by the Administrative Conference of the United States revealed that 45% of 142 federal departments, agencies, and subagencies were already experimenting with AI tools. Rishi Sunak recently spoke in glowing terms about how AI could transform public services, “from saving teachers hundreds of hours of time spent lesson planning to helping NHS patients get quicker diagnoses and more accurate tests”. Artificial intelligence is typically “trained” on a large dataset and then analyses that data in ways which even those who developed the tools sometimes do not fully understand.
This raises expectations for governments to play a more prevalent role in the digital society and to ensure that the potential of technology is harnessed, while negative effects are controlled and, where possible, avoided. To lay the foundation for the special issue that this research article introduces, we 1) present a systematic review of existing literature on the implications of the use of Artificial Intelligence (AI) in public governance and 2) develop a research agenda. First, an assessment of 26 articles on this topic reveals much exploratory, conceptual, qualitative, and practice-driven research, reflecting the increasing complexity of using AI in government and the resulting implications, opportunities, and risks for public governance. Second, based on both the literature review and the analysis of articles included in this special issue, we propose a research agenda comprising eight process-related and seven content-related recommendations.
While we may not want to expand the scope of existing agencies to accommodate this task, we have our choice of government labs, like the National Institute of Standards and Technology, the Lawrence Livermore National Laboratory, and other Department of Energy labs, as well as universities and nonprofits. Looking further into the future, these technologies could help groups reach consensus and make decisions. Researchers at the A.I. company DeepMind suggest that LLMs can build bridges between people who disagree, helping bring them to consensus. Science fiction writer Ruthanna Emrys, in her remarkable novel A Half-Built Garden, imagines how A.I. might help people have better conversations and make better decisions, rather than taking advantage of these biases to maximize profits.
Some technical experts, however, are more confident that humans will remain in the driver’s seat as AI deployment increases. Alex Dimakis, a professor of electrical engineering and computer science at the University of Texas at Austin, worked on an artificial intelligence commission for the U.S. Automation is also used for time-consuming work in order to “increase work output and efficiency,” according to a statement from the Department of Information Resources.
Courts are using machine learning and deep learning to analyze decades of case law, and millions of past cases, to predict outcomes for future cases and accelerate case resolutions in both domestic and international courts. Many government AI and data analytics investments aim to directly impact real-time operational decisions and outcomes by 2024. If AI is effectively applied across government departments, it can have a positive impact on the healthcare, energy, economic, and agriculture sectors. However, without clear governmental guidelines, AI can jeopardize the well-being of citizens. Similarly, in the United States, government organizations and insurance companies use an AI tool to identify changes in infrastructure or property. The Australian company NearMap has developed an AI tool that provides land identification and segmentation from aerial images.
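The case-outcome prediction idea can be illustrated, very loosely, with a nearest-neighbour lookup over bag-of-words descriptions of past cases. Production systems use far larger corpora and real models; every case and outcome below is invented:

```python
def bag(text):
    """Lower-cased bag-of-words representation of a case description."""
    return set(text.lower().split())

def jaccard(a, b):
    """Word-overlap similarity between two bags of words."""
    return len(a & b) / len(a | b)

# Invented past cases: (description, recorded outcome)
past_cases = [
    ("tenant withheld rent over unrepaired heating", "tenant wins"),
    ("landlord evicted tenant without notice", "tenant wins"),
    ("driver ran red light causing collision", "plaintiff wins"),
    ("contract breached by late delivery of goods", "plaintiff wins"),
]

def predict_outcome(description):
    """Return the outcome of the most textually similar past case."""
    words = bag(description)
    _, outcome = max(past_cases, key=lambda case: jaccard(words, bag(case[0])))
    return outcome
```

Even this toy version shows why such systems inherit the biases of their training data: the prediction is only ever a reflection of how similar past cases were decided.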
Autonomous vehicles, for example, raise questions about liability in the event of accidents. Determining who is responsible when an AI-controlled vehicle is involved in a collision can be complex. Additionally, decisions made by AI systems, such as those related to traffic management or accident avoidance, may raise ethical questions, such as the allocation of limited resources or the protection of passengers versus pedestrians. Balancing these ethical dilemmas and developing appropriate regulations and guidelines for AI in transportation is a complex and ongoing challenge.
A new report, “Gauging the impact of Generative AI on Government,” finds that three-fourths of agency leaders polled said that their agencies have already begun establishing teams to assess the impact of generative AI and are planning to implement initial applications in the coming months. Join Emily Vose, Partner at IBM, and Christian Ward, Chief Data Officer at Yext, as they dive into the opportunities — and pitfalls — of leveraging AI in the federal government. (iv) recommendations for the Department of Defense and the Department of Homeland Security to work together to enhance the use of appropriate authorities for the retention of certain noncitizens of vital importance to national security by the Department of Defense and the Department of Homeland Security. (g) Within 30 days of the date of this order, to increase agency investment in AI, the Technology Modernization Board shall consider, as it deems appropriate and consistent with applicable law, prioritizing funding for AI projects for the Technology Modernization Fund for a period of at least 1 year.