With the Internet already groaning under the weight of d!ck pics and facial scanning increasingly used instead of passwords, do we really want
AI to turn flat images into accurate 3D renderings of their content?
A group of boffins from the University of Nottingham and Kingston University think so, and alarmingly so do 400,000 Internet randoms who have contributed their (we hope) faces to a project to extrapolate 3D faces from non-3D pics.
The researchers achieved the feat of extruding a flat face by using paired 2D images and 3D scans as training data for a convolutional neural network (CNN).
As they say in the abstract of their paper, posted at arXiv and due to be presented at the International Conference on Computer Vision in Venice in October, the resulting model “performs direct regression of a volumetric representation of the 3D facial geometry from a single 2D image”.
The good news, then, is that this particular technique only works on faces.
The bad news? The code’s on GitHub
under an MIT licence.
In the University of Nottingham’s announcement, Dr Yorgos Tzimiropoulos (who supervised PhD students Aaron Jackson and Adrian Bulat) explained that the group trained the neural network “on 80,000 faces to directly learn to output the 3D facial geometry from a single 2D image”.
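For the curious, the shape of the idea — a network that eats one flat image and spits out a voxel occupancy volume, trained on 2D/3D pairs — can be sketched in a few lines. To be clear, this is a toy stand-in, not the authors' CNN: a single linear layer plays the network, the image size, voxel grid, and the lone synthetic training pair are all made up, and real systems train a deep convolutional model on the 80,000-face dataset instead.

```python
import numpy as np

# Toy sketch (NOT the researchers' architecture): a "network" that maps a
# single flattened 2D image directly to per-voxel occupancy probabilities,
# trained on a paired 2D/3D example. Sizes below are arbitrary assumptions.

rng = np.random.default_rng(0)

IMG = 16 * 16          # flattened 16x16 grayscale "face" (toy size)
VOX = 8 * 8 * 8        # 8x8x8 voxel occupancy grid (toy size)

W = rng.normal(scale=0.01, size=(VOX, IMG))
b = np.zeros(VOX)

def predict(img_flat):
    """Regress per-voxel occupancy probabilities from one flat image."""
    z = W @ img_flat + b
    return 1.0 / (1.0 + np.exp(-z))     # sigmoid -> each voxel in (0, 1)

# One synthetic training pair standing in for the 80,000 real faces.
x = rng.random(IMG)                          # fake "photo"
y = (rng.random(VOX) > 0.5).astype(float)    # fake target occupancy volume

lr = 0.5
for _ in range(200):                    # plain gradient descent on
    p = predict(x)                      # binary cross-entropy
    grad_z = p - y                      # dBCE/dz for sigmoid outputs
    W -= lr * np.outer(grad_z, x)
    b -= lr * grad_z

volume = predict(x).reshape(8, 8, 8)    # the regressed "3D face" volume
```

The point of the sketch is only the input/output contract: flat pixels in, a dense volume of occupancy probabilities out, with the 3D supervision doing all the work — exactly the "direct regression" framing in the paper's abstract, minus the actual convolutions.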
At a blind guess, we can easily imagine the Internet has a corpus of millions of other
2D / 3D correspondences that … never mind.
It’s nice to know that the researchers only had good in mind for their work, rather than evil: the university release cites applications like personalising video games, helping people visualise an online purchase like spectacles, or even before and after shots for those contemplating surgery. Above the belt. ®