May 19, 2020

Neo4j: How a lack of context awareness is hampering AI development

Artificial intelligence
Machine Learning
Emil Eifrem
5 min

What do we mean when we say ‘context’? In essence, context is the information that frames something to give it meaning. Taken on its own, a shout could be anything from an expression of joy to a warning. In the context of a structured piece of on-stage Grime, it’s what made Stormzy’s appearance at Glastonbury the triumph it was.

The problem is that context doesn’t come free – it has to be discovered. AI (Artificial Intelligence) designers are painfully aware of this and try to sidestep the problem by building narrow but powerful systems that do one thing extremely well, but don’t scale horizontally and don’t offer anything like human-level understanding of complexity.

One increasingly popular way out of this impasse is to extend AI’s power with a graph database-based approach to working with complexity. If you’re not familiar, a graph database is a way of managing data that differs both from the traditional relational database (think Oracle or Microsoft SQL Server) and from other NoSQL approaches (such as MongoDB). Gartner identified enterprise interest in the technology as one of its top trends, and commentators have dubbed it the ‘Year of Graph’. It also already has a wide variety of use cases, from Amazon-style shopping recommendations to fraud and money laundering detection.
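To make the idea concrete, the property-graph model a graph database uses can be sketched in plain Python: nodes carry properties, and typed relationships connect them directly, so traversing a connection needs no join tables. (The node and relationship names below are purely illustrative, not any product’s API.)

```python
# Minimal sketch of a property graph: nodes with properties,
# connected by typed relationships. Illustrative names only.
from collections import defaultdict

class Graph:
    def __init__(self):
        self.nodes = {}                 # node id -> properties
        self.edges = defaultdict(list)  # node id -> [(rel_type, target id)]

    def add_node(self, node_id, **props):
        self.nodes[node_id] = props

    def add_edge(self, source, rel_type, target):
        self.edges[source].append((rel_type, target))

    def neighbours(self, node_id, rel_type=None):
        """Follow outgoing relationships, optionally of one type."""
        return [t for r, t in self.edges[node_id]
                if rel_type is None or r == rel_type]

g = Graph()
g.add_node("alice", kind="Customer")
g.add_node("book", kind="Product")
g.add_edge("alice", "BOUGHT", "book")

print(g.neighbours("alice", "BOUGHT"))  # ['book']
```

A shopping recommendation is then just a short traversal: from a customer, through BOUGHT relationships, to other customers and on to what they bought.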

And increasingly, graph software is being used to power AI and ML (Machine Learning). That’s because its in-built architecture provides that missing context for AI applications, yielding outcomes far superior to those from AI systems that don’t attempt to incorporate the background.

AI that can be trained to help us deal with fraud better would be a huge boost to global GDP. According to Stratistics MRC, the global fraud detection and prevention market was valued at $17.5bn in 2017 and is expected to grow to $120bn by 2026. More than 48,000 U.S. patents for fraud and anomaly detection have been issued in the last 10 years, so this is a critical issue.


To address this problem, financial services companies are looking to graphs to reveal predictive patterns, find unusual behaviour, and score influential entities, using contextual information loaded into Machine Learning models. Another use case is finding better automated ways to help fight the problem of opioid abuse in the US. The Association for the Advancement of Artificial Intelligence has piloted the use of graphs to detect clusters of interactions between doctors and pharmacies to improve opioid fraud predictions, for example.
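One common pattern, sketched hypothetically below rather than drawn from any specific product, is to derive simple graph features from transaction links (an account’s degree, and how interconnected its counterparties are) and feed them into a fraud model as extra input columns:

```python
# Hypothetical sketch: turning transaction links into graph features
# that a fraud model could consume. Account names are invented.
from collections import defaultdict
from itertools import combinations

transactions = [("acct_a", "acct_b"), ("acct_a", "acct_c"),
                ("acct_b", "acct_c"), ("acct_d", "acct_a")]

# Build an undirected neighbour map from the transaction pairs.
neighbours = defaultdict(set)
for src, dst in transactions:
    neighbours[src].add(dst)
    neighbours[dst].add(src)

def graph_features(account):
    """Degree plus a crude clustering signal: how many pairs of the
    account's counterparties also transact with each other."""
    ns = neighbours[account]
    closed = sum(1 for a, b in combinations(ns, 2) if b in neighbours[a])
    return {"degree": len(ns), "closed_pairs": closed}

print(graph_features("acct_a"))  # {'degree': 3, 'closed_pairs': 1}
```

Features like these encode the connected context directly, which is exactly what a row-by-row view of the same transactions would miss.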

So if there is already AI technology that is adept at helping with specific, well-defined tasks, imagine if we also had AI that could handle ambiguity as well. We humans deal with ambiguity by using context to figure out what’s important in a situation, then extend that learning to understand new situations. We need to help AI do the same – and in a way that ensures the explainability and transparency of any given decision.

Context-supported AI also helps an AI’s human overseers map and visualise the decision path within the contextual dataset, removing the ‘black box’ aspect of decision-making that can undermine confidence in how the system reached the conclusions or recommendations it offers.

Context-supported AI is potentially game-changing 

My firm, Neo4j, is so convinced of the importance of graphs to AI that we have formally submitted a graph and AI proposal to NIST, the US government’s National Institute of Standards and Technology, which is helping to create a plan for the next wave of US government AI standards.

This consultation is about building trust in AI, so it really matters. As the Deputy Assistant to the President for Technology Policy, Michael Kratsios, notes, “The information we receive will be critical to Federal engagement in the development of technical standards for AI and strengthening the public’s trust and confidence in the technology.”

Our proposal said that AI and related applications of intelligent computing, like Machine Learning (ML), are more effective, trustworthy, and robust when supported and interpreted by the kind of contextual information that only graph software can provide. Let’s take a serious AI issue to see why – ethics and unconscious bias in AI. If all we do is teach our computers to reason the exact same way we do, with all our flaws and limited human biases, we will end up with systems that may redline and exclude or disfavour certain service users, or discriminate against some socio-economic groups.

As a result, it is our social, business and technical conclusion that context should be incorporated into AI to ensure we apply these technologies in ways that do not violate societal and economic principles. 

AI standards that don't explicitly include contextual information will result in subpar outcomes, as solution providers leave out hugely valuable information. This is just one place where graph software, developed as a way to represent connected data and analyse the relationships in it, can step forward. That’s because graph technology can enrich any dataset to make it more useful, and as a better basis for any AI applications. 

Graph technology could help any and all AI projects – and could help us all, as we become the beneficiaries of more sensitive, accurate and insightful computer systems of the future. 


Jun 12, 2021

How changing your company's software code can prevent bias

Lisa Roberts, Senior Director ...
3 min
Removing biased terminology from software can help organisations create a more inclusive culture, argues Lisa Roberts, Senior Director of HR at Deltek

Two-thirds of tech professionals believe organizations aren’t doing enough to address racial inequality. After all, many companies will just hire a DEI consultant, run a few training sessions and call it a day.

Wanting to take a unique yet impactful approach to DEI, Deltek, the leading global provider of software and solutions for project-based businesses, reviewed and removed all exclusive terminology in its software code. By removing terms such as ‘master’ and ‘blacklist’ from company code, Deltek is working to ensure that diversity and inclusion are woven into every aspect of the organization.

Business Chief North America talks to Lisa Roberts, Senior Director of HR and Leader of Diversity & Inclusion at Deltek, to find out more.

Why should businesses today care about removing company bias within their software code?  

We know that words can have a profound impact on people and leave a lasting impression. Many of the words that have been used in a technology environment were created many years ago, and today those words can be harmful to our customers and employees. Businesses should use words that will leave a positive impact and help create a more inclusive culture in their organization.

What impact can exclusive terms have on employees? 

Exclusive terms can have a significant impact on employees. It starts with the words we use in our job postings to describe the responsibilities of the position, and of course we also see this in our software code and other areas of the business. Exclusive terminology can be hurtful, and even make employees feel unwelcome. That can impact a person’s desire to join the team, stay at a company, or ultimately decide to leave. All of these critical actions impact the organization’s bottom line.

Please explain how Deltek has removed biased terminology from its software code

Deltek’s engineering team has removed biased terminology from our products, as well as from our documentation. The terms we focused on first that were easy to identify include blacklist, whitelist, and master/slave relationships in data architecture. We have also made some progress in removing gendered language, such as changing he and she to they in some documentation, as well as heteronormative language. We see this most commonly in pick lists that ask to identify someone as your husband or wife. The work is not done, but we are proud of how far we’ve come with this exercise!

What steps is Deltek taking to ensure biased terminology doesn’t end up in its code in the future?

What we are doing at Deltek, and what other organizations can do, is to put accountability on employees to recognize when this is happening – if you see something, say something! We also listen to feedback our customers give us and have heard their feedback on this topic. Those are both very reactive things of course, but we are also proactive. We have created guidance that identifies words that are more inclusive and also just good practice for communicating in a way that includes and respects others.
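Guidance like this can also be backed by an automated check. The following is a minimal sketch, not Deltek’s actual tooling, and the term list, suggested replacements and sample input are all illustrative, of a scan that could run as part of a build pipeline:

```python
# Minimal sketch of an automated check for exclusionary terms.
# The term list and suggested replacements are illustrative only.
import re

FLAGGED_TERMS = {
    "whitelist": "allowlist",
    "blacklist": "denylist",
    "master": "primary",
    "slave": "replica",
}

def scan(text):
    """Return (line_number, term, suggestion) for each flagged term found."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for term, suggestion in FLAGGED_TERMS.items():
            if re.search(rf"\b{term}\b", line, re.IGNORECASE):
                findings.append((lineno, term, suggestion))
    return findings

sample = 'ips = read("blacklist.txt")\nbranch = "master"\n'
for lineno, term, suggestion in scan(sample):
    print(f'line {lineno}: "{term}" -> consider "{suggestion}"')
```

In a pipeline, a non-empty result could fail the build or post a review comment, turning “if you see something, say something” into a default rather than an act of individual vigilance.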

What advice would you give to other HR leaders who are looking to enhance DEI efforts within company technology? 

My simple advice is to start with what makes sense to your organization and culture. Doing nothing is worse than doing something. And one of the best places to start is by acknowledging this is not just an HR initiative. Every employee owns the success of D&I efforts, and employees want to help the organization be better. For example, removing biased terminology was an action initiated by our Engineering and Product Strategy teams at Deltek, not HR. You can solicit the voices of employees by asking for feedback in engagement surveys, focus groups, and town halls. We hear great recommendations from employees and take those opportunities to improve.
