On 30th November 2022, OpenAI released ChatGPT into the world. Adoption of the tool has been massive, launching a technological race among other companies, including tech giants such as Google and Amazon, to bring competitive AI products to market.
This latest wave of AI has caused a paradigm shift in scientific progress and has become a hot topic across almost every sector of society. But, in reality, how much do most of us know about AI and how it is used?
In this article, we are going to look at what AI is, what it does, how we use it, some of the pitfalls of AI adoption and how we can keep our partners and customers safe so that we continue to be a trusted source of information and advice.
AI is the ability of a computer, or a robot controlled by a computer, to do tasks that would otherwise be completed by a human – tasks that require human intelligence and discernment.
ChatGPT is a particular type of AI: generative AI and, specifically, a large language model (LLM). Generative AI enables the creation of unique and entirely new content, including text, images, videos, and sound, when prompted to do so by a user.
Essentially, ‘artificial intelligence’ is a transient term; what we are thinking of as AI right now may not be seen as such in a few years’ time or even in the next few months. For example, a calculator function used to solve a calculation rather than using pen and paper would once have been considered AI, but now a lot of this older and more established AI functionality is invisible to us because it’s behind the scenes. It’s been powering products we are familiar with for years – think about the ‘recommended suggestions’ at the end of a Spotify playlist you’ve created. What powers those suggestions? Using data the platform already holds about your behaviour and preferences, and data about what other people have listened to, AI will attempt to predict what you might want to add to your playlist. Every time you add or reject a suggestion, the platform ‘learns’ a bit more about you and its catalogue of music.
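To make the mechanics concrete, here is a minimal sketch in Python of the kind of feedback loop described above. The tracks, genre features and update rule are invented for illustration – this is not how Spotify’s actual system works.

```python
# A toy feedback loop: score candidate tracks against a learned taste
# profile, then update that profile whenever the listener accepts or
# rejects a suggestion. All features and numbers are invented.

# Each track is described by simple genre features (illustrative only).
tracks = {
    "Track A": {"indie": 1.0, "electronic": 0.2},
    "Track B": {"electronic": 1.0},
    "Track C": {"indie": 0.8, "folk": 0.5},
}

taste = {}          # the platform's evolving model of one listener
LEARNING_RATE = 0.1

def score(features):
    """Predict how much the listener will like a track."""
    return sum(taste.get(f, 0.0) * w for f, w in features.items())

def update(features, accepted):
    """Nudge the taste profile after the listener accepts or rejects."""
    direction = 1.0 if accepted else -1.0
    for f, w in features.items():
        taste[f] = taste.get(f, 0.0) + LEARNING_RATE * direction * w

# The listener adds Track A and skips Track B: the profile shifts
# towards 'indie' and away from 'electronic'.
update(tracks["Track A"], accepted=True)
update(tracks["Track B"], accepted=False)

# Suggestions are then re-ranked using what the platform has learned.
for name in sorted(tracks, key=lambda t: -score(tracks[t])):
    print(f"{name}: {score(tracks[name]):.2f}")
```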
The transitory nature of AI might seem obvious, but it’s an important point: much of the ongoing advancement in AI aims to be completely original, designed to provide unique advantages that will differentiate a business’s offering. These efforts are far-reaching and strategic.
When we start to think about what AI adoption feels like, we need to distinguish between commodity AI and what we call ‘space race AI’.
Commodity AI is the type of AI developed and/or adopted by numerous businesses and used across many applications. It can be bought and used relatively easily to operate tools for your business. These tools will make you more effective but won’t make your operations any better than, or different to, other businesses using the same or similar AI. It’s a bit like buying a washing machine for your business – you buy the product, then you will need to plumb it in and operate it according to your needs (and the instructions!). It will help you wash clothes faster and hopefully better than washing them by hand, but, as soon as your competitor buys a washing machine, they will also have the same benefits.
In terms of commodity AI, we could look at Teams and other Microsoft products such as Copilot as examples. These commodity tools use AI in lots of ways, such as providing language translation, creating meeting transcriptions, and intelligently adapting to each user’s needs based on individual user information. They might make our working lives more efficient, but our competitors also have them.
Space race AI is about innovation and attempting to do something never done before – like putting humans on the moon. It’s hard work, expensive, and if you get it wrong, the consequences can be catastrophic. However, the rewards are in achieving a winning market position and providing something exclusive. For IDP this means leveraging our unique large-scale data into tools and systems that can’t be replicated by other businesses and providing the best, most advanced products and services for our partners (institutions) and customers (students).
IDP leads the international education recruitment sector in the use of data science and AI technology. Our scale and unified business model give us unique opportunities to improve how we work and differentiate our offering. IDP has the best data science team in the sector, having begun building it in 2020, but it is the human expertise and knowledge of our 2,300 counsellors and entire global community, combined with the volume of data we handle, that provide the full value, benefit, and opportunity.
When explaining how we use AI we should first differentiate between AI and automation. For example, in the field of admissions we are already using automation to provide innovation and efficiency in our FastLane service. FastLane combines skilled human input with data matching determined by institution rules. This means that students who meet live institution-set criteria can receive a speedy offer-in-principle. This in turn initiates an accelerated process to submit and verify (by humans) the application and fast-track the applicant through admissions. Our case studies on FastLane show it has improved Application to Offer rates and is helping institutions save significant amounts of time.
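As a rough illustration of what this rule-based matching might look like (as distinct from AI), here is a minimal sketch in Python. The field names, criteria and thresholds are hypothetical – they are not IDP’s or any institution’s actual rules.

```python
# A minimal sketch of rule-based matching: an institution publishes live
# entry criteria, and a student who meets all of them receives an
# offer-in-principle, which humans then verify. All fields and
# thresholds below are hypothetical.

institution_rules = {
    "min_ielts": 6.5,
    "min_gpa": 3.0,
    "accepted_qualifications": {"A-levels", "IB", "HSC"},
}

def offer_in_principle(student: dict, rules: dict) -> bool:
    """Return True when the student meets every live criterion."""
    return (
        student["ielts"] >= rules["min_ielts"]
        and student["gpa"] >= rules["min_gpa"]
        and student["qualification"] in rules["accepted_qualifications"]
    )

applicant = {"ielts": 7.0, "gpa": 3.4, "qualification": "IB"}

if offer_in_principle(applicant, institution_rules):
    # Automation ends here: the application is still submitted and
    # verified by skilled humans before any formal offer is made.
    print("Offer-in-principle issued; routing to human verification.")
```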
Like most businesses, IDP is on a journey with AI technology and innovation, and for us this journey is primarily focused on looking at the good, the bad, the baddies, and the future.
The Good!
First, the positives: AI technology can help speed up processes, provide answers to questions and direct users to the information that’s right for them. When AI is introduced to a tool you already have, you won’t necessarily notice anything different, but it will work better – everything will be smoother, and it might seem to know exactly what you want to do next, or it will direct you to the right things for your individual journey.
The most notable example of IDP using AI is in lead scoring and student placement. We’ve been using the technology for around three years now and this is possibly the most obvious area of use in our model. AI technology in this instance enables us to assist students efficiently and make sure we provide the most appropriate advice based on the information they, and hundreds of thousands of students before them, have already given us.
There are lots of ways we already use AI or are planning to use it in the future. For example, as students travel through the IDP Live app there are lots of different scenarios available and different ‘calls to action’, so we use an AI model to predict the right call to action for each student depending on their situation/journey. This means we are helping them more efficiently.
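The pattern behind both lead scoring and call-to-action prediction is broadly the same: train a model on outcomes from past students, then use its predictions to prioritise and route new ones. Below is a minimal sketch of that pattern using a simple logistic regression; the features and data are entirely synthetic and do not reflect IDP’s production models.

```python
# A minimal sketch of lead scoring: fit a classifier on outcomes from
# past students, then score new leads by predicted probability.
# Features and data are synthetic, purely for illustration.
from sklearn.linear_model import LogisticRegression

# Hypothetical features per student: [app sessions, courses shortlisted,
# days since first contact]; label: 1 if the student went on to apply.
X = [
    [12, 5, 10],
    [2, 0, 60],
    [8, 3, 20],
    [1, 1, 90],
    [15, 7, 5],
    [3, 0, 45],
]
y = [1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X, y)

# Score a new lead: a high probability prompts a timely, relevant call
# to action; a low one suggests a lighter-touch follow-up.
new_student = [[10, 4, 14]]
print(f"Likelihood of applying: {model.predict_proba(new_student)[0][1]:.2f}")
```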
One area where we are achieving strong results is in course shortlisting on the app. We have developed a course recommendation engine to predict which course or programme a student will be interested in based on more than a million historic applications and millions of shortlisting and click stream events. This achieves a high shortlisting rate with around 60 to 70% of first-time visitors to the app shortlisting at least one course.
We also use AI to help recommend the right content for a student. The AI looks at browsing patterns and selects content that will assist the student on their journey. In the future we will integrate this technology into the main IDP website, making it feel more like a social media feed – full of what’s pertinent and appropriate for the individual student – rather than a corporate brochure site. This in turn will provide much more data for the AI, helping it become ever more relevant to users.
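One common way to build this kind of course and content recommendation is from co-occurrence in shortlisting and click-stream events – ‘students who shortlisted X also shortlisted Y’. The sketch below illustrates that approach with invented data; it is not IDP’s production engine.

```python
# A minimal co-occurrence recommender built from shortlisting events:
# recommend the courses most often shortlisted alongside a given one.
# Course names and events are invented for illustration.
from collections import Counter
from itertools import combinations

# Each record: the set of courses one student shortlisted.
shortlists = [
    {"MSc Data Science", "MSc AI", "MSc Statistics"},
    {"MSc Data Science", "MSc AI"},
    {"MBA", "MSc Finance"},
    {"MSc Data Science", "MSc Statistics"},
]

# Count how often each ordered pair of courses is shortlisted together.
co_counts = Counter()
for basket in shortlists:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(course, k=2):
    """Return the k courses most often shortlisted alongside `course`."""
    scores = {b: n for (a, b), n in co_counts.items() if a == course}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("MSc Data Science"))  # e.g. ['MSc AI', 'MSc Statistics']
```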
With all this in mind, we acknowledge that we are a high-stakes business and we provide high-stakes advice at extremely important points in our customers’ lives. The information we provide is often pivotal in important decision making. It is therefore crucial that customers trust the advice we give.
So, should we just adopt AI in every facet of our business because it has been proven to be useful in some areas of what we do? The short answer is ‘no’… not necessarily. We need to make sure every use of AI has benefit and adds value – for our partners, for students and for IDP.
The Bad!
Like any developing technology, AI has weaknesses. These weaknesses pose challenges and therefore risks. AI is only as good as the training data it is given, and that input must be broad and unbiased.
Bad Training
Consider the classic example of training AI to tell the difference between a wolf and a dog.* All the pictures of wolves are on snow, and all the pictures of dogs are not. The AI doesn’t know what a dog is, so it focuses on the snow, because the training data was not fully representative. This makes AI vulnerable to biased and incomplete training data, and it can be very difficult to capture all the data needed for every process.
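This failure mode is easy to reproduce. In the sketch below, a classifier is trained on two hand-crafted features standing in for what an image model might extract; because ‘snow in the background’ perfectly separates wolves from dogs in the training data, the model learns snow rather than the animal. The features and data are invented for illustration.

```python
# Reproducing the wolf/dog failure: in training, every wolf photo has a
# snowy background, so the model keys on snow instead of the animal.
# Features and data are hand-crafted purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# Features: [background_is_snow, ear_pointiness]; label: 1 = wolf, 0 = dog.
X_train = [
    [1, 0.9], [1, 0.8], [1, 0.7],   # wolves: always photographed on snow
    [0, 0.8], [0, 0.3], [0, 0.2],   # dogs: never on snow (one pointy-eared)
]
y_train = [1, 1, 1, 0, 0, 0]

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# A husky playing in snow: a dog, but on a snowy background.
husky_on_snow = [[1, 0.6]]
print(model.predict(husky_on_snow))  # -> [1]: wrongly labelled 'wolf',
                                     # because the model learned 'snow'
```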
IDP collects data on when offers are made to our students, but there are certain elements of data that are not always received. For example, when an offer is not accepted, we might not know why the application wasn’t finalised. Ideally, and to be able to train our AI systems, we need to understand the missing factors and know what has happened to students through the entirety of the offer process.
Where IDP has gaps in its data, we are working to capture the missing detail across all platforms, so that we can provide the most comprehensive AI training input we can, and our space race AI can be the best it can be.
*Example posited by Besse, P., Castets-Renard, C., Garivier, A. and Loubes, J.-M. (2018) in: Can Everyday AI be Ethical? Machine Learning Algorithm Fairness (English Version)
AI doesn’t have a conscience
Integrity and morality are important in a business like IDP; our human touch is our strength. We can adapt and empathise, and even say sorry when things don’t go quite as well as we had hoped. AI, however, is not good at admitting when it has got something wrong.
To illustrate this, our Chief Data Officer, Stuart Nickols, asked ChatGPT whether any American presidential candidate had lost the election twice. ChatGPT replied “Yes, several American presidential candidates have lost the election twice”. In explaining its answer, it offered some “notable examples”, including Thomas Jefferson who, it told Stuart, “lost in 1796, but then won in 1800 and 1804”.
Stuart then asked, “How many times has Thomas Jefferson lost?” and ChatGPT clarified that the candidate had “lost the presidential election only once”. Stuart asked ChatGPT to compare its answers. ChatGPT replied that there had been a discrepancy, but then it seemed to contradict itself by offering the same information again. Stuart persisted in questioning ChatGPT about the original question and the answer given. Eventually, ChatGPT admitted there had been an “error” in its first response.
Stuart says, “There are weaknesses in the way that LLMs deal with tasks that you would really expect them to get right. This makes it difficult for businesses to put AI fully into high stakes processes.
“Ultimately, ChatGPT is infinitely clever and infinitely stupid as well. It shows no remorse when it gets things wrong, it just says, ‘I was wrong’. It’s not completely trustworthy and it doesn’t have a conscience. So, when we’re talking about IDP’s connected community, these things really matter and we must make sure that when we integrate this incredibly powerful technology, with its huge potential, into our processes, it can’t make these sorts of mistakes.”
Dealing with AI’s fallibility
When most of us think about powerful technology, we think of super-cool, super-sleek, space-age design that is perfect in all respects. But, as we have seen, AI is fallible and needs elements of human control in place, especially when customer actions depend on how the technology handles a process.
The way that most businesses, including IDP, are dealing with this issue is to use a virtual ‘stop’ button: a way to ensure that humans remain in the loop, with the ability to stop the process from getting things wrong. IDP’s extensive human expertise will be positioned to make sure our customers are not exposed to AI fallibility. Our counsellors will be at the forefront for students, keeping a firm hand on the technology.
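In practice, the pattern can be as simple as gating every automated decision behind a confidence threshold and a switch that humans control. The sketch below illustrates the idea; the threshold, names and stand-in model are illustrative assumptions, not IDP’s implementation.

```python
# A minimal human-in-the-loop gate: the AI may act automatically only
# when the stop button is off AND it is confident; every other case is
# routed to a human counsellor. Names and threshold are illustrative.

AI_ENABLED = True            # the virtual 'stop' button
CONFIDENCE_THRESHOLD = 0.9

def model_predict(case):
    """Stand-in for a real model call: returns (prediction, confidence)."""
    return "recommend_course_x", case.get("signal_strength", 0.0)

def handle(case):
    """Route a case to automation or to a human counsellor."""
    if not AI_ENABLED:
        return "human_review"            # stop button pressed
    prediction, confidence = model_predict(case)
    if confidence < CONFIDENCE_THRESHOLD:
        return "human_review"            # AI unsure: keep humans in the loop
    return f"auto:{prediction}"

print(handle({"signal_strength": 0.95}))  # auto:recommend_course_x
print(handle({"signal_strength": 0.40}))  # human_review
```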
The Baddies! (adversaries and bad actors)
The internet is wonderful; it has enabled us to do and have extraordinary things and gives us access to a global menu of virtually everything. But, and this is not news to anyone, the internet has a dark side. Sadly, whenever you use the internet these days, you have to assume that bad actors could be part of the interaction.
If we, as a business, plug anything into the internet, it is exposed to bad actors and adversaries, who will try to unpick, expose and infiltrate everything we put out there and make open. Humans are much more creative than AI – at least they are right now – and there have been several examples of adversarial acts that have fooled or confused AI.
As the uptake of AI technology increases in the higher education sector, governments and institutions are focusing on quality and compliance to ensure that students are set up for success in their studies. In the field of English language testing, the AI used by some testing platforms has proved extremely vulnerable to adversaries, so IDP is committed to providing the best, most secure services possible, meaning students can present trusted language test results to admissions teams.
Despite the baddies, we know that AI provides great opportunity. We have sector-leading capabilities in the field and we use AI where we know it works and is proven to be fair. We are extremely cautious about how AI is used in our high-stakes advice models and especially in situations where we know we have adversaries.
The Future
While it’s impossible to predict the future, we can be fairly certain that AI will become better, more powerful and cheaper. One question frequently put to us is, ‘Will AI take my job?’ Our answer is ‘no, AI will not take your job, but eventually your job may be carried out by someone using AI.’
Matt Toohey, Chief Information Officer at IDP, says, “It is our job to make sure we give our team members the best AI toolkits, both from a commodity AI point of view and an AI space race outlook.
“Full AI automation is incredibly difficult because AI is incompetent in ways we cannot predict. It is powerful and affords us great opportunity, but humans are much better at countering human adversaries.”
It is therefore incumbent upon us to develop the new AI that we want to power our products – AI that will keep us strategically ahead of other businesses in the international education recruitment sector. At the same time, we are dedicated to the safety of our partners and customers and know that our work has little or no room for error or misjudgement. We may use AI for some things, but we will not trust it with the ‘big decision’ processes.
IDP, as a business, is pro AI. Our technological innovation and scale make us a stand-out performer in the sector and within our model we are constantly looking for the opportunities that will enable us to provide the best, most accurate advice. But we must always be aware of the negatives and utilise an approach to AI that always has one hand hovering over the STOP button so that we can maintain credibility and trust.
In our Emerging Futures 4 (EF4) research, we asked students for their thoughts on how they use AI technology, how they think it might be used by institutions, and when they would prefer human interaction.
Students were asked if they had used or intended to use AI, including ChatGPT, to help write university applications. Of the global cohort, 39% said, ‘Yes’. Students from China were most inclined to use AI for this purpose, with 73% responding ‘Yes’. Globally, 45% of students indicated they would use AI to help them decide which institution to study at, while 47% were open to using it to decide which course to study.
Students rated ‘making the application’ and ‘shortlisting suitable institutions’ as the top two phases at which they would most want human input and advice from a trained counsellor, followed closely by ‘confirming my final choice of institution’.
Of students globally, 41% expected that institutions they are applying to would use AI to assess all or part of their application and 35% said that the use of AI in determining an applicant’s suitability for a course could make the process fairer for all. However, 31% said it may discriminate against certain students. Overall, the most important factor for students when making an application to an institution was that they receive a decision quickly.
Tennealle O’Shannessy, IDP Chief Executive Officer and Managing Director, said, “Combining AI with skilled human interaction allows us to provide students with comprehensive information to assist them in making the best possible choices when they need it most. By putting AI technology in the hands of our expert counsellors, IDP is therefore continuing to enhance human connections in a way that meets students’ needs.
“The use of AI and data science to improve processes in higher education is developing very quickly. It is a very exciting time in the sector, and these advancements will enable the industry to enhance the experience for international students.”
To find out more about IDP’s data development and use of AI, get in touch with us today.