Hair is among the most remarkable features of the human body, impressing with dynamic qualities that bring scenes to life. Studies have consistently shown that dynamic elements attract and hold attention more strongly than static images. Social media platforms like TikTok and Instagram see huge numbers of portrait photos shared daily, as users strive to make their pictures both appealing and artistically engaging. This drive fuels researchers' exploration of animating human hair within still images, aiming to offer a vivid, aesthetically pleasing viewing experience.

Recent advances in the field have introduced methods for infusing still images with dynamic elements, animating fluid substances such as water, smoke, and fire within the frame. Yet these approaches have largely ignored the intricate nature of human hair in real-life photography. This article focuses on the artistic transformation of human hair within portrait photography, which involves translating the image into a cinemagraph.

A cinemagraph is an innovative short-video format popular among professional photographers, advertisers, and artists. It is used across digital media, including digital advertisements, social media posts, and landing pages. The appeal of cinemagraphs lies in their ability to combine the strengths of still images and videos: certain regions of a cinemagraph feature subtle, repetitive motion in a short loop, while the rest remains static. This contrast between stationary and moving elements effectively captures the viewer's attention.

By transforming a portrait photo into a cinemagraph, complete with subtle hair motion, the idea is to enhance the photo's allure without detracting from its static content, creating a more compelling and engaging visual experience.
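The core cinemagraph mechanism described above can be illustrated with a minimal compositing sketch: animated frames are blended into a still image only inside a mask, so the masked region loops while everything else stays frozen. This is a toy illustration, not the paper's pipeline; all names and shapes here are illustrative assumptions.

```python
import numpy as np

def composite_cinemagraph(still, frames, mask):
    """Blend looping video frames into a still image inside a mask.

    still:  (H, W, 3) static image
    frames: (T, H, W, 3) looping frames for the animated region
    mask:   (H, W) values in [0, 1]; 1 = animated, 0 = frozen
    Returns a (T, H, W, 3) cinemagraph clip.
    """
    m = mask[None, ..., None]  # broadcast to (1, H, W, 1)
    return frames * m + still[None] * (1.0 - m)

# Toy example: a 2-frame "loop" animating only the left column of a 2x2 image.
still = np.zeros((2, 2, 3))
frames = np.ones((2, 2, 2, 3))
mask = np.array([[1.0, 0.0], [1.0, 0.0]])
out = composite_cinemagraph(still, frames, mask)
# Left column follows the video; right column stays frozen at the still.
```

Because the blend is per-pixel, a soft-edged mask would feather the boundary between the moving and frozen regions, which is how real cinemagraph tools avoid visible seams.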

Existing techniques and commercial software can generate high-fidelity cinemagraphs from input videos by selectively freezing certain regions. Unfortunately, these tools are not suitable for processing still images. In contrast, there has been growing interest in still-image animation, though most approaches have focused on animating fluid elements such as clouds, water, and smoke. The dynamic behavior of hair, composed of fibrous material, poses a distinct challenge compared with fluids. Unlike fluid animation, which has received extensive attention, the animation of human hair in real portrait photos has remained relatively unexplored.

Animating hair in a static portrait photo is difficult because of the intricate complexity of hair structure and dynamics. Unlike the smooth surfaces of the human body or face, hair comprises hundreds of thousands of individual strands, resulting in complex, non-uniform structures. This complexity leads to intricate motion patterns within the hair, including interactions with the head. While specialized techniques exist for modeling hair, such as dense camera arrays and high-speed cameras, they are typically costly and time-consuming, limiting their practicality for real-world hair animation.

The paper presented in this article introduces a novel AI method for automatically animating hair within a static portrait photo, eliminating the need for user intervention or complex hardware setups. The insight behind this approach is that the human visual system is less sensitive to individual hair strands and their motion in real portrait videos than to synthetic strands on a digitized human in a virtual environment. The proposed solution is therefore to animate "hair wisps" instead of individual strands, creating a visually pleasing viewing experience. To achieve this, the paper introduces a hair wisp animation module, enabling an efficient and automated solution. An overview of the framework is illustrated below.
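The article does not detail how the wisp animation module works internally. As a purely illustrative stand-in for the idea of animating a wisp as a unit, the sketch below shifts the pixels under one wisp mask by a small sinusoidal offset, producing a seamless short loop. The function name, the shift-based "motion," and all parameters are assumptions for illustration; the paper's actual module is far more sophisticated.

```python
import numpy as np

def animate_wisp(image, wisp_mask, num_frames=8, amplitude=2):
    """Crude periodic motion for one wisp: shift the masked pixels
    horizontally by a sinusoidal offset and paste them back.

    image:     (H, W, 3) portrait photo
    wisp_mask: (H, W) binary mask of one hair wisp
    Returns a (num_frames, H, W, 3) looping clip; frame 0 is the input.
    """
    frames = []
    for t in range(num_frames):
        # Sinusoidal offset in pixels; returns to 0 at t = 0, so the
        # clip loops seamlessly back to the original image.
        dx = int(round(amplitude * np.sin(2 * np.pi * t / num_frames)))
        shifted = np.roll(image * wisp_mask[..., None], dx, axis=1)
        shifted_mask = np.roll(wisp_mask, dx, axis=1)[..., None]
        frame = image * (1 - shifted_mask) + shifted
        frames.append(frame)
    return np.stack(frames)
```

Running each detected wisp through such a module with slightly different phases would give the layered, non-uniform motion that makes hair look alive, which is the intuition behind animating wisps rather than whole hair regions.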

The key challenge in this context is how to extract these hair wisps. While related work, such as hair modeling, has focused on hair segmentation, those approaches primarily target extraction of the entire hair region, which differs from this objective. To extract meaningful hair wisps, the researchers innovatively frame hair wisp extraction as an instance segmentation problem, where an individual segment within a still image corresponds to a hair wisp. This problem definition lets them leverage instance segmentation networks, which not only simplifies the hair wisp extraction problem but also enables the use of advanced networks for effective extraction. Furthermore, the paper presents a hair-wisp dataset of real portrait photos for training the networks, together with a semi-automatic annotation scheme to produce ground-truth annotations for the identified hair wisps. Sample results from the paper are reported in the figure below, compared with state-of-the-art techniques.
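Framing wisp extraction as instance segmentation means each detection from the network comes with a confidence score and a soft per-instance mask, which must be filtered and binarized into usable wisp masks. The sketch below shows that post-processing step under assumed conventions (soft masks in [0, 1], one per instance, as Mask R-CNN-style networks produce); the paper's actual pipeline and thresholds may differ.

```python
import numpy as np

def extract_wisps(pred_masks, scores, score_thresh=0.5, mask_thresh=0.5):
    """Turn raw instance-segmentation output into binary wisp masks.

    pred_masks: (N, H, W) soft masks in [0, 1], one per detected instance
    scores:     (N,) detection confidences
    Returns a list of boolean (H, W) masks, one per confident detection.
    """
    keep = scores >= score_thresh          # drop low-confidence detections
    return [m >= mask_thresh for m in pred_masks[keep]]
```

Each surviving mask then corresponds to one hair wisp and can be handed to the animation module independently, which is exactly the advantage of the instance-segmentation formulation over whole-region hair segmentation.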

This was a summary of a novel AI framework designed to transform still portraits into cinemagraphs by animating hair wisps with pleasing motion and no noticeable artifacts. If you are interested and want to learn more, please feel free to refer to the links cited below.


Check out the Paper and Project Page. All credit for this research goes to the researchers on this project.



Daniele Lorenzi received his M.Sc. in ICT for Internet and Multimedia Engineering in 2021 from the University of Padua, Italy. He is a Ph.D. candidate at the Institute of Information Technology (ITEC) at the Alpen-Adria-Universität (AAU) Klagenfurt. He is currently working in the Christian Doppler Laboratory ATHENA, and his research interests include adaptive video streaming, immersive media, machine learning, and QoS/QoE evaluation.

