FEDERAL TIMES
By Bob Gaines and Michael Spinali
Three strategies agencies can employ to overcome challenges and create a trusted generative AI platform that will improve processes and citizen services.
Generative AI holds great potential for improving federal government workflow processes, operational efficiency, and the delivery of citizen services, but there are several key considerations agencies must focus on to make the technology work for them. Maintaining good data governance to ensure the AI is trustworthy and accurate is critical, as is establishing an underlying architecture that supports model portability and scalability.
Today, most agencies are resource-constrained and still rely on restrictive legacy toolsets that limit their agility and innovation and keep them from taking full advantage of AI.
Here are three strategies agencies can employ to overcome these challenges and create a trusted generative AI platform that will improve processes and citizen services for many years to come.
Combine external, internal data
The models used by generative AI solutions are trained on datasets so large and varied that it can be difficult to know the lineage of that data, which is important for trustworthiness. The data consumed by these tools can be controlled through model training, but that is a highly manual process that can significantly extend the time it takes for agencies to introduce new services.
A better approach is to have the AI pull from both publicly available information, via a large language model (LLM), and an internal content store. The AI combines intelligence gathered from the LLM with a database curated by the agency to deliver more accurate and reliable recommendations. In short, the user gets both the answer and the supporting resources needed to validate the response, which builds trust.
For instance, let’s say a user at the Social Security Administration receives an application for disability assistance and needs to check the applicant’s medical history, earnings, and other factors that determine whether they can receive financial aid. In this case, the applicant’s sensitive health information is kept in a secure content store and only accessed at the time the user submits a query.
This process is called retrieval-augmented generation (RAG). With RAG, the system retrieves the relevant records from the secure store and uses them to write a personalized, summarized response that concisely addresses the applicant's request, drawing on secured personally identifiable information (PII) only to validate the case at hand. The result is a fast, personalized response that reduces questions and follow-up phone calls, which improves efficiency.
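A minimal sketch of this retrieve-then-generate pattern in Python is shown below. The in-memory content store, the naive keyword-overlap retriever, and the call_llm() stub are all hypothetical simplifications; a production system would use a vector database, an agency-approved model, and proper access controls.

```python
# Sketch of grounding an LLM answer in an agency-curated content store.
# Everything here is a simplified placeholder, not a production design.

def retrieve(query: str, store: dict[str, str], top_k: int = 2) -> list[tuple[str, str]]:
    """Rank documents by naive keyword overlap with the query."""
    query_terms = set(query.lower().split())
    scored = [
        (len(query_terms & set(text.lower().split())), doc_id, text)
        for doc_id, text in store.items()
    ]
    scored.sort(reverse=True)
    return [(doc_id, text) for score, doc_id, text in scored[:top_k] if score > 0]

def call_llm(prompt: str) -> str:
    """Placeholder for a call to whichever LLM the agency has approved."""
    return f"[model response grounded in the supplied context]\n{prompt[:80]}..."

def answer_with_sources(query: str, store: dict[str, str]) -> dict:
    """Retrieve supporting documents, then ask the model to answer from them."""
    sources = retrieve(query, store)
    context = "\n".join(text for _, text in sources)
    prompt = (
        "Answer using only the context below and cite the documents used.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return {"answer": call_llm(prompt), "sources": [doc_id for doc_id, _ in sources]}

content_store = {
    "policy-001": "Disability assistance eligibility depends on medical history and earnings.",
    "policy-002": "Applicants must submit updated earnings statements each year.",
}
print(answer_with_sources("What determines disability assistance eligibility?", content_store))
```

Returning the source document IDs alongside the generated answer is what gives the user something concrete to validate the response against.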
With RAG, the data store can easily be updated to include new information (for example, if policies change), and agencies retain tight control over the data their AI platforms use. As a result, agencies can be confident that users are receiving accurate, up-to-date outputs and recommendations.
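Continuing the same hypothetical sketch, a policy change is just another document written to the store, and the next query is grounded in it:

```python
# A policy update is a simple write to the curated store; no retraining needed.
content_store["policy-003"] = (
    "Effective this fiscal year, telehealth visits count toward required medical documentation."
)
print(answer_with_sources("Do telehealth visits count as medical documentation?", content_store))
```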
In addition to using secure content stores, agencies must fine-tune LLMs with internal data to minimize any bias inherent in publicly available data. This will improve the effectiveness and accuracy of outputs.
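As a rough illustration of the mechanics of continued training on internal text (not of bias mitigation itself), the sketch below uses PyTorch with the Hugging Face transformers library. The gpt2 base model and the internal_records list are placeholders, not recommendations; an agency would substitute its approved model and curated, de-identified data.

```python
# Minimal fine-tuning loop over internal agency text (illustrative only).
import torch
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = "gpt2"  # placeholder; substitute the agency's approved base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

internal_records = [  # illustrative stand-ins for curated, de-identified agency text
    "Disability determinations weigh medical evidence and documented work history.",
    "Benefit applications must include earnings records for the prior ten years.",
]

optimizer = AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):
    for record in internal_records:
        batch = tokenizer(record, return_tensors="pt", truncation=True, max_length=128)
        outputs = model(**batch, labels=batch["input_ids"])  # causal LM loss
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()

model.save_pretrained("finetuned-agency-model")
tokenizer.save_pretrained("finetuned-agency-model")
```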
Use an open-source architecture
Developers can use AI to shorten development cycles, boost productivity, and accelerate the delivery of citizen services, provided they are using the right tools and architectures. The good news is that these tools do not have to be proprietary or restrictive. Open-source solutions like Python, PyTorch, and TensorFlow allow developers to write a job once and run it anywhere, on premises or in the cloud, on the hardware best suited to the task.
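A minimal PyTorch sketch of that write-once pattern might look like the following; the same script runs unchanged whether a GPU, an XPU, or only a CPU is present. The tiny model and random data are placeholders for a real training job.

```python
# Device-agnostic training: select whatever accelerator is present, fall back to CPU.
import torch
import torch.nn as nn

def pick_device() -> torch.device:
    """Prefer a CUDA GPU, then an Intel XPU if the build exposes one, then CPU."""
    if torch.cuda.is_available():
        return torch.device("cuda")
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        return torch.device("xpu")
    return torch.device("cpu")

device = pick_device()
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

features = torch.randn(64, 16, device=device)  # placeholder batch
targets = torch.randn(64, 1, device=device)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(features), targets)
    loss.backward()
    optimizer.step()

print(f"trained on {device}, final loss {loss.item():.4f}")
```

Isolating device selection in one function keeps hardware differences out of the rest of the training code, which is what makes the job portable between on-premises and cloud instances.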
Meanwhile, SYCL allows developers to migrate their code between various hardware accelerators, including legacy technologies, without having to rewrite it, which saves time. It gives agencies the freedom to port applications to different hardware instances as compute needs grow, addressing the model portability and scalability challenges noted earlier.
Hire or upskill the right employees
Implementing a successful generative AI workflow is predicated not only on technology but also on having the right people to coordinate and manage the entire system. Agencies must hire or upskill employees who are familiar with open-source platforms and frameworks and the Kubernetes container orchestration system to expedite the development of intelligent applications and services.
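As a rough sketch of the kind of fluency this implies, the official Kubernetes Python client can declare and roll out a containerized inference service in a few lines; the service name and container image below are hypothetical.

```python
# Declare and create a small Deployment for a hypothetical inference service.
from kubernetes import client, config

config.load_kube_config()  # assumes a configured kubeconfig on the workstation
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="rag-service"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "rag-service"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "rag-service"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="rag-service",
                    image="registry.example/rag-service:latest",  # hypothetical image
                    ports=[client.V1ContainerPort(container_port=8000)],
                )
            ]),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
```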
Agencies will also need staff experienced in managing high-performance AI accelerators, which provide more processing power than traditional central processing units. High-performance computing (HPC) will allow agencies to move beyond commonplace processes to more demanding AI workloads. For instance, not only will they be able to analyze applications and contracts intelligently and automatically, but they will also be able to create models that help predict weather patterns, assist in drug delivery, and support other projects.
Agencies must also have the processing capability to handle these HPC workloads. They may want to consider processors with higher core frequencies, large memory caches, and other features that provide a combination of power and efficiency.
It all begins now, with creating trusted and accurate generative AI that expedites workflows and the delivery of services. Agencies that embrace data control mechanisms, scalable architectures, and open technologies while investing in the appropriate skills will have the inside edge. They'll be well on their way to providing citizens with innovative and impactful solutions.
Bob Gaines is senior director of accelerated computing and Michael Spinali is a GPU and AI solutions architect at Intel.