
2023 Author: Bryan Walter | [email protected]. Last modified: 2023-05-21 22:24

American and British developers have created an application in which a user draws a sketch and immediately receives an image generated from it by a neural network. The algorithm consists of two parts: one completes the sketch, and the other turns the completed sketch into a photorealistic image. The work will be presented at the ICCV 2019 conference; a preprint has been published on arXiv.org.
In recent years, developers have created many image-generation algorithms whose output is sometimes difficult to distinguish from real photographs. For example, researchers at NVIDIA achieved notable results by teaching neural networks to synthesize photographs of non-existent people, as well as realistic videos. Later, developers began releasing programs for ordinary users that turn a simple sketch or color drawing into a photorealistic image. However, these programs either do not work in real time or require the user to draw a complete sketch on their own.
Developers led by Eli Shechtman of Adobe Research have created an application that completes a sketch in real time and turns it into a synthesized photo. The interface consists of two windows and auxiliary buttons: in the first window the user draws a sketch, and in the second sees the image created by the neural networks. To begin, the user selects an object class, for example pineapple, and the algorithm immediately produces a typical sketch for that class. As the user draws, the neural network continuously updates the sketch, completing the part the user has drawn.
The authors split the problem into two parts and solved it with a combination of two generative adversarial networks. At the first stage, an algorithm trained on sketches of a given class receives the user's partial sketch and completes it. At the second stage, a realistic image is generated from the completed sketch.
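The two-stage structure can be sketched in code. The snippet below is only an illustration of the data flow, not the authors' implementation: the two "generators" are naive numpy stand-ins (a mirror fill and an RGB broadcast) where the real system uses trained GANs, and all function names are hypothetical.

```python
import numpy as np

def complete_sketch(partial_sketch, mask):
    """Stage-1 stand-in: fill the undrawn region of a partial sketch.

    The real system uses a GAN generator trained on sketches of one
    class; here we simply mirror the drawn strokes into the missing
    area to illustrate the input/output contract."""
    completed = partial_sketch.copy()
    completed[~mask] = np.flipud(partial_sketch)[~mask]  # placeholder fill
    return completed

def translate_to_image(sketch):
    """Stage-2 stand-in: map a 1-channel sketch to a 3-channel 'photo'.

    The real system uses an image-to-image GAN; here we broadcast the
    sketch into RGB just to show the expected shapes."""
    return np.repeat(sketch[..., None], 3, axis=-1)

# A 64x64 partial sketch: only the top half has been drawn so far.
partial = np.zeros((64, 64), dtype=np.float32)
partial[:32] = np.random.rand(32, 64)
mask = np.zeros((64, 64), dtype=bool)
mask[:32] = True

completed = complete_sketch(partial, mask)   # stage 1: sketch completion
photo = translate_to_image(completed)        # stage 2: sketch -> image
print(completed.shape, photo.shape)          # (64, 64) (64, 64, 3)
```

The key point the split captures is that each stage is a self-contained image-to-image mapping, so the two networks can be trained and swapped independently.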

The scheme of the algorithm
To train the neural networks, the developers built their own dataset of photographs and sketches of ten object classes, with the sketches created automatically from the edges of objects in the photographs. The authors tested two schemes: ten separate models, one per class, and a single multiclass generator that produces different images depending on a conditional vector.
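Two details of this setup can be illustrated compactly: deriving a sketch from image edges, and conditioning a multiclass generator on a class vector. The snippet below is a simplified stand-in, assuming a Sobel-style gradient edge map and a one-hot conditional vector; the authors' actual edge detector and conditioning scheme may differ.

```python
import numpy as np

def edge_sketch(gray, threshold=0.2):
    """Binarized gradient magnitude as a crude automatic 'sketch'.

    Stand-in for the paper's edge-based sketch creation: central
    differences approximate the image gradient, and pixels above a
    fraction of the maximum magnitude become sketch strokes."""
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]
    mag = np.hypot(gx, gy)
    return (mag > threshold * mag.max()).astype(np.float32)

def class_condition(class_index, num_classes=10):
    """One-hot conditional vector selecting one of the ten object
    classes for a multiclass generator (one-hot encoding is an
    assumption here, not taken from the paper)."""
    v = np.zeros(num_classes, dtype=np.float32)
    v[class_index] = 1.0
    return v

# A synthetic 'photo': a bright square on a dark background.
gray = np.zeros((32, 32), dtype=np.float32)
gray[8:24, 8:24] = 1.0
sketch = edge_sketch(gray)       # strokes appear along the square's border
cond = class_condition(3)        # select the 4th of ten object classes
```

Automating sketch creation this way is what makes a paired photo/sketch training set feasible without hand-drawing thousands of sketches.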

Program interface
In addition to the paper, the authors published the application's source code for Linux and macOS on GitHub, along with brief documentation.
Curiously, there is also a reverse project that turns photographs into sketches. Last year, an Australian engineer created a cardboard camera that, at the push of a button, takes a photo, converts it into a sketch, and immediately prints it on a built-in thermal printer.