Artificial Intelligence (AI) is a consistently trending topic. Most people have heard of its power to revolutionize the information technology industry and even our personal lives.
But what exactly is AI?
John McCarthy, a founder of artificial intelligence, defined AI as the science and engineering of making intelligent machines, especially intelligent computer programs. In other words, we are finally able to make computers do things only a human brain could once do. Machines might even be better at some tasks, leaving humans to use their unique skills in different capacities.
As public servants, our brains can only wonder: “How can AI solve some of government’s thorniest problems?”
If this is something that’s been on your mind, you might want to get in touch with Shared Services Canada’s Data Science and Artificial Intelligence (DSAI) team. We'll explain why...
The DSAI team’s goal is to unlock the benefits of AI for our clients and the Government of Canada at large by promoting AI best practices, capabilities and technologies.
When we began helping others with their AI journey, we noticed many misconceptions and missing pieces, such as insufficient data to support a successful AI project.
As a small (but mighty) team with a broad mandate, we wanted to make sure we were working with clients whose use-cases had the most potential for success!
One successful use case is the Client Satisfaction Feedback Indicators (CSFI). Much of the data collected from Shared Services Canada's (SSC) client satisfaction survey is unstructured feedback, written by clients or transcribed from interviews. Fortunately, DSAI was able to use Natural Language Processing (NLP) techniques to extract the clients’ critical concerns, both positive and negative. This information could then be graphed to establish trends over time and show whether our efforts are meeting clients’ expectations.
After seeing some great...and not so great use cases, we wondered “What are the fundamental pillars to replicate a successful AI project?”
Well, after doing some research on the best approaches to develop an AI use case, the team came up with the Viability Model (VMo) tool. The aim of DSAI’s VMo tool is to inform decision-makers by assessing AI projects for their potential for success and risks for failure. VMo should be used when evaluating use cases for entry into the development pipeline.
The tool has already gone through multiple versions and technological changes over the past year.
At first, the team built the tool in Excel, since it let us easily tweak our statistical model and our assessment questionnaire.
As the tool became more complex, we wanted something clients could use on their own, without the team hand-holding them through the assessment process. So, we re-wrote the tool as a web-hosted application.
To design our assessment properly, we reviewed research papers on effective AI life cycle management. From our reading and experience, successful AI projects go through the following workflow:
- Defining the business problem AI will address
- Identifying suitable data
- Selecting machine learning models
- Training the models on data
- Testing the models
- Deploying models
- Integrating models with user tools
- Monitoring deployed models for accuracy and drift
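The phases above form an ordered pipeline, which could be captured in a simple data structure. The sketch below is purely illustrative (the `nextPhase` helper is hypothetical, not part of VMo):

```javascript
// Ordered phases of an AI project life cycle, as listed above.
const LIFE_CYCLE_PHASES = [
  "Define the business problem",
  "Identify suitable data",
  "Select machine learning models",
  "Train the models",
  "Test the models",
  "Deploy the models",
  "Integrate models with user tools",
  "Monitor for accuracy and drift",
];

// Illustrative helper: given the current phase, return the next one
// (or null once the project reaches the final monitoring phase).
function nextPhase(phase) {
  const i = LIFE_CYCLE_PHASES.indexOf(phase);
  if (i === -1) throw new Error(`Unknown phase: ${phase}`);
  return i + 1 < LIFE_CYCLE_PHASES.length ? LIFE_CYCLE_PHASES[i + 1] : null;
}
```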
Assessing barriers in the AI life cycle would allow us to discuss potential issues with our clients before the project even begins.
Once clients enter the application, they complete an assessment of 39 questions covering each phase of the life cycle. Scores are calculated from a series of yes/no answers, each paired with a confidence rating from 1 to 10.
We have designed the questionnaire so that each question is weighted to generate an overall score. The confidence ratings, in turn, are used to place upper and lower bounds around both the success and risk scores.
Once the questionnaire is completed, we aggregate the scores for each life cycle phase and send them to our final step: the report section.
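VMo's actual weights and formulas are not published, but the mechanics described here (weighted yes/no answers, confidence-driven bounds, per-phase aggregation) can be sketched roughly as follows. Everything in this block is an assumption for illustration, not DSAI's real model:

```javascript
// Minimal sketch of a weighted yes/no assessment with confidence bounds.
// The bound formula is an illustrative assumption: lower confidence
// widens the band around each question's contribution.

// Each answer: { phase, weight, yes, confidence } with confidence in 1..10.
function scorePhases(answers) {
  const phases = {};
  for (const a of answers) {
    const p =
      phases[a.phase] ??
      (phases[a.phase] = { score: 0, max: 0, lower: 0, upper: 0 });
    const contribution = a.yes ? a.weight : 0;
    const uncertainty = a.weight * (1 - a.confidence / 10);
    p.score += contribution;
    p.max += a.weight;
    p.lower += Math.max(0, contribution - uncertainty);
    p.upper += Math.min(a.weight, contribution + uncertainty);
  }
  // Normalize each phase to a 0-100 score with bounds.
  for (const p of Object.values(phases)) {
    p.score = (100 * p.score) / p.max;
    p.lower = (100 * p.lower) / p.max;
    p.upper = (100 * p.upper) / p.max;
  }
  return phases;
}
```

Note how full confidence (10/10) collapses the bounds onto the score itself, while a 5/10 answer spreads them apart; that matches the intuition that tight bounds signal a confident respondent.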
Based on how the recipient answered the questionnaire, and how confident they were, a report is produced.
The report has 3 sections:
- Radar diagrams of the overall success and risk scores for each life cycle phase, plus one diagram for the upper/lower success bounds and one for the upper/lower risk bounds. The tighter the upper and lower bounds, the more confident the recipient was in their answers.
- The profile information submitted with the assessment, so we can see who completed the questionnaire.
- Recommendations. We are especially excited about this section, since extra work went into tailoring the recommendations to how the recipient answered the assessment questionnaire.
Note that we added a Print Report button so the report can be saved, as well as a unique reference key for each assessment so it can be retrieved later without completing the questionnaire again.
Once you have your report in hand, you can send an email to firstname.lastname@example.org with your results and we'll gladly help guide you through the next steps.
For those who are interested in the technology we used to develop this tool, here’s a high-level view of our stack:
- For the front end we are using the ReactJS library with the NextJS framework.
- On the backend, we wrote a REST API using NodeJS and ExpressJS.
- Since we are leveraging Amazon Web Services as our cloud provider, we decided to go with DynamoDB, a NoSQL database.
As of right now, VMo is only available on SSC’s VPN network and to a select few others. In the future, we would like to make VMo a great tool for all of the Government of Canada.
VMo should also be used to continuously evaluate a project already in the pipeline, providing a dashboard of success and risk factors over the course of the project. The idea is to gather enough data to train enhanced statistical models that deliver Key Performance Indicator (KPI) feedback, change behavior, and drive training to increase accuracy and performance for new initiatives.
We would like to eventually add a Likert Scale, slider bars, and branching to enhance the tool.
AI products are inevitable, and the decision to build them for the Government of Canada can be made easier with the VMo tool. It is designed to help assess potential AI projects for their potential for success and risk of failure. VMo is currently available only through the DSAI team, but we hope to expand it across government.
For more information on VMo, please contact us at email@example.com
Courses & Events:
January 17, 2022 | Artificial Intelligence Is Here Series: How AI Is Transforming the Economy (event)
Self-paced | Getting Started with Machine Learning (DDN220)