FaceTec: Why the Samsung S10 proves liveness detection is needed in facial recognition
Looking back on this year’s Mobile World Congress (MWC), it was clear that every major player in the mobility industry wanted to stand out in a heavily saturated sea of products. From Huawei introducing its sleek foldable smartphone to a Nokia phone with five integrated cameras, each business was peacocking its latest devices at this year’s event.
While there were many interesting announcements from the annual event, it was Samsung’s latest release, the S10, that caught our attention, specifically its new security measures and its journey toward a passwordless experience for its customers.
The Samsung device leverages several mobile security measures, including the world's first ultrasonic in-screen fingerprint scanner, a nifty feature that unlocks the Galaxy S10 using sound waves.
Samsung has also once again integrated facial recognition in its devices, a feature that first appeared in the ill-fated Galaxy Note 7. Similar to Apple’s Face ID, users can access their phones using facial biometrics. Once an idea confined to science fiction, facial recognition has now matured and become available to smartphone users around the world who prefer a modern layer of security.
Is biometric recognition a layer of security or just a convenience?
There has been much talk regarding whether facial recognition and fingerprint scanners are a viable form of security. In particular, the facial recognition function has been called into question once again with the latest release from Samsung. The S10’s facial recognition was publicly spoofed within weeks of its launch, and videos continue to surface showing how simple drawn pictures can produce false positives, unlocking the device with ease.
Interestingly, unlike past responses from both Apple and Samsung, which continued to promote facial recognition as a security feature even after it was spoofed, Samsung quickly stated that its latest fingerprint scanner is much more secure than its facial recognition and could therefore be used instead.
Why fingerprint technology can be an inadequate security measure
In a bid to usher in the age of passwordless security, mobile manufacturers have integrated fingerprint scanners into their devices. Using this biometric method, customers can access their data with a fingerprint, and manufacturers have positioned this as a secure method of protecting data, one that is highly resistant to spoofing.
But in this use case, there are some inherent issues. Fingerprints can wear down, and they fail to work when fingers are wet or very dry or when the sensor is dirty. More fundamentally, the vast majority of sensors don’t truly identify the actual user for proper authentication; they only verify a relationship to the device itself – the enrolled fingerprint is associated with that specific device – and a fingerprint can be duplicated well enough to pass as proper identification. Widely available resin adhesive has been used to create a thin film of a fingerprint, for example, to mark attendance for friends at school. It has also been reported that not only can fingerprints be counterfeited (even from a photo taken at a distance), but what most fingerprint sensors capture – partial prints – is not that unique, particularly on the inexpensive sensors found in typical mobile phones. Altogether, this casts some doubt on Samsung’s claims regarding its under-screen ultrasonic sensor. It is surely a better solution than previous attempts because, unlike standard sensors, it looks for something only a live person can provide – though not necessarily the legitimate live person.
In most cases, just relying on one popular authentication method is not enough.
Is two-factor authentication (2FA) a foolproof method of security?
The logic is, if one factor is good, then two are better. The inherent problem with that position is that if one of the factors is weak, the attack surface becomes larger, not smaller. In addition, there are many combinations of 2FA in which both factors are relatively easy to acquire through phishing or brute-force attacks. Some experts argue that facial recognition used in conjunction with fingerprint sensors could be a foolproof way of protecting data on personal devices. But organisations cannot rely on a device and a biometric together on the same “channel”. Beyond being independently robust, 2FA factors must also be independent of each other, so that compromising one cannot compromise the other. Typical fingerprint implementations like these work for low-risk operations, but for high-risk transactions they are not enough.
2FA has been bypassed using several different methods, including automated phishing attacks. At least as importantly, because 2FA takes more time (higher “user friction”), the inconvenience has often proven a greater impediment to daily use than the promise of higher security has been an incentive to embrace it.
Why certified liveness detection is essential for anti-spoofing
Face biometrics have gained favour, and for many reasons. Besides the undeniable fact that humans recognise each other primarily through their faces, the method is not intrusive, faces provide a tremendous amount of data to a sensor for much higher levels of certainty, and access to them is easier in a much wider variety of day-to-day circumstances. But, like other modalities, face can be compromised if not properly implemented. While better matching accuracy in facial recognition is always welcome, it still does not verify that what the camera sees is a live human, arguably the most critical factor in authentication. True liveness detection in face authentication (of which facial recognition is a part) is the ability to verify dozens of unique human traits in real time. Liveness detection proves the legitimate, correct person is alive and present at the time of access, and not a non-human representation of the real person, like a photo, video or mask.
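To make the idea concrete, here is a minimal sketch of one common liveness technique, an unpredictable challenge-response check. This is an illustrative toy, not FaceTec’s actual method: a real system analyses raw camera frames, whereas here each “frame” is assumed to be a dict of already-extracted signals (e.g. whether a blink or head turn was detected), and the challenge list and thresholds are invented for the example.

```python
import secrets

# Hypothetical set of challenges a liveness prompt might issue.
CHALLENGES = ("blink", "turn_left", "turn_right")

def issue_challenge():
    """Pick an unpredictable challenge so a pre-recorded video of the
    victim is unlikely to contain the matching response."""
    return secrets.choice(CHALLENGES)

def satisfies(frame, challenge):
    """Check whether one frame's extracted signals answer the challenge."""
    if challenge == "blink":
        return frame.get("blink", False)
    if challenge == "turn_left":
        return frame.get("head_turn") == "left"
    if challenge == "turn_right":
        return frame.get("head_turn") == "right"
    return False

def is_live(frames, challenge, min_frames=2):
    """Pass only if enough frames respond to the randomly issued challenge.
    A static photo or replayed clip that ignores the prompt fails."""
    hits = sum(1 for frame in frames if satisfies(frame, challenge))
    return hits >= min_frames
```

In use, a printed photo held to the camera produces frames with no blink or head-turn signals and fails, while a present, cooperating person passes; production systems layer many such signals (texture, depth, reflectance) on top of simple challenges.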
Intelliwave SiteSense boosts APTIM material tracking
We’ve been engaged with the APTIM team since early 2019, providing SiteSense, our mobile construction SaaS solution, for their maintenance and construction projects, allowing them to track materials and equipment and manage inventory.
We have been working with the APTIM team to standardize material tracking processes and procedures, ultimately with the goal of reducing the amount of time spent looking for materials. Industry studies show that better management of materials can lead to a 16% increase in craft labour productivity.
Construction is one of the oldest industries, yet comparatively one of the least technology-driven. About 95% of engineering and construction data captured goes unused, 13% of working hours are spent looking for data, and around 30% of companies have applications that don’t integrate.
With APTIM, we’re looking at early risk detection through predictive analysis and forecasting of material constraints, integrating with the ecosystem of software platforms, and reporting on real-time data with a ‘field-first’ focus – through initiatives like the Digital Foreman. The APTIM team has seen great wins in the field, utilising bar-code technology to check in thousands of material items quickly compared with manual methods.
There are three key areas when it comes to successful Materials Management in the software sector – culture, technology, and vendor engagement.
Given the state of world affairs, access to data needs to be off site via the cloud to support remote working conditions, providing a ‘single source of truth’ accessible to many parties. The tech sector is always growing, so companies need faster and more reliable access to this cloud data. And digital supply chain initiatives engage vendors much earlier in the process to drive collaboration with their clients, which gives more assurance as there is greater emphasis on automating data capture.
It’s been a challenging period with the pandemic, particularly for the supply chain. Look what happened in the Suez Canal – events can suddenly impact material costs and availability, and you really have to be more efficient to survive and succeed. Virtual system access can solve some issues, and you need to cast a wider net when looking at data access.
Solving problems comes down to better visibility, and proactively solving issues with vendors and enabling construction teams to execute their work. The biggest cause of delays is not being able to provide teams with what they need.
On average, 2% of materials are lost or re-ordered, and that figure only reflects the material cost. What it does not capture is the duplicated procurement effort and the vendor and shipping costs, all of which also have an environmental impact.
As things start to stabilise, APTIM continues to utilise SiteSense to boost efficiencies and solve productivity issues proactively. Integrating with 3D/4D modelling is just the beginning of what we can do. Access to data can help you firm up bids to win work and make better cost estimates, and AI and ML are the next phase, providing an ecosystem of tools.
A key focus for Intelliwave and APTIM is to increase the availability of data, whether that means creating a data warehouse for visualisations or adding integrations to provide additional value. We want to move to more of an enterprise usage phase – up to now it’s been project-based – so more people can access data in real time.