For silent video, a neural network gathers up sounds
Instead of simply overlaying music, editors who want to recreate the true sounds of a scene must painstakingly pick matching effects from a sound library. Developers have built an algorithm that can perform this task without human involvement.
The program begins by detecting potential sound sources in the frame. The CLIP neural network then matches the detected objects against the Epidemic Sound effects database.
As a result, the five most likely effects for the objects and surroundings are suggested for each scene. The system selects one by default, but the user can add more.
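The matching step described above can be sketched as a nearest-neighbor search in a shared embedding space: the frame and the text tags of candidate sound effects are embedded, and the tags closest to the frame embedding become the suggestions. The snippet below is a minimal illustration of that idea using random stand-in vectors; the tag list, embedding dimension, and `top_k_effects` helper are all hypothetical, and a real pipeline would obtain the embeddings from CLIP's image and text encoders and draw tags from an effects library such as Epidemic Sound.

```python
import numpy as np

# Hypothetical sound-effect tags; a real system would draw these from an effects library.
SOUND_TAGS = ["dog barking", "car engine", "rain", "footsteps",
              "birdsong", "door slam", "ocean waves", "crowd chatter"]

def top_k_effects(frame_embedding, tag_embeddings, tags, k=5):
    """Rank sound tags by cosine similarity to the frame embedding."""
    frame = frame_embedding / np.linalg.norm(frame_embedding)
    tags_norm = tag_embeddings / np.linalg.norm(tag_embeddings, axis=1, keepdims=True)
    sims = tags_norm @ frame                      # cosine similarities
    order = np.argsort(-sims)[:k]                 # indices of the k best matches
    return [(tags[i], float(sims[i])) for i in order]

# Stand-in embeddings; CLIP encoders would produce these in a real pipeline.
rng = np.random.default_rng(0)
dim = 512
tag_embs = rng.normal(size=(len(SOUND_TAGS), dim))
# Simulate a frame whose embedding lies close to the "rain" tag.
frame_emb = tag_embs[2] + 0.1 * rng.normal(size=dim)

suggestions = top_k_effects(frame_emb, tag_embs, SOUND_TAGS, k=5)
default_choice = suggestions[0][0]  # the system picks the top match by default
print(suggestions)
```

Ranking by cosine similarity rather than raw dot product keeps the comparison independent of embedding magnitude, which is how CLIP-style retrieval is typically done.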