These results indicate that rotational self-motion cues are important when teleporting, and that navigation can be improved by enabling piloting.

In mixed reality (MR), augmenting virtual objects consistently with real-world illumination is one of the key factors in providing a realistic and immersive user experience. For this purpose, we propose a novel deep learning-based method to estimate high dynamic range (HDR) illumination from a single RGB image of a reference object. To obtain the illumination of a scene, previous approaches either inserted a special camera into that scene, which may interfere with the user's immersion, or analyzed the radiance reflected from a passive light probe with a specific type of material or a known shape. The proposed method requires no additional gadgets or strong prior cues and aims to predict illumination from a single image of an observed object with a wide range of homogeneous materials and shapes. To solve this ill-posed inverse rendering problem effectively, three sequential deep neural networks are employed based on a physically inspired design. These networks perform end-to-end regression to gradually decrease the dependency on material and shape. To cover various conditions, the proposed networks are trained on a large synthetic dataset generated by physically based rendering. Finally, the reconstructed HDR illumination enables realistic image-based lighting of virtual objects in MR. Experimental results demonstrate the effectiveness of this approach compared against state-of-the-art methods. The paper also suggests several interesting MR applications in indoor and outdoor scenes.

Fitts's law facilitates approximate comparisons of target-acquisition performance across a variety of settings. Conceptually, the index of difficulty of 3D object manipulation with six degrees of freedom can also be computed, which allows the comparison of results from different studies.
Prior experiments, however, often revealed much worse performance than one would reasonably expect on this basis. We argue that this discrepancy stems from confounding variables, and we show how Fitts's law and related research methods can be applied to isolate and identify relevant factors of motor performance in 3D manipulation tasks. The results of a formal user study (n=21) demonstrate competitive performance in compliance with Fitts's model and provide empirical evidence that simultaneous 3D rotation and translation can be beneficial.

There has been increasing demand for interior design and decorating. The main challenges are deciding where to put objects and how to place them plausibly within the given domain. In this paper, we propose an automatic method for decorating the planes in a given image, which we call Decoration In (DecorIn for short). Given an image, we first extract planes as decorating candidates according to estimated geometric features. We then parameterize the planes with an orthogonal and semantically consistent grid. Finally, we compute the position of the decoration, i.e., a decoration box, on the plane using an example-based decorating method that can describe a partial image and compute the similarity between partial scenes. We conduct comprehensive evaluations and demonstrate our method on a wide range of applications. Our method is more efficient, in both time and cost, than generating a layout from scratch.

In this paper, we introduce two local surface averaging operators with local inverses and use them to devise a method for surface multiresolution (subdivision and reverse subdivision) of arbitrary degree. Similar to previous work by Stam, Zorin, and Schröder that achieved forward subdivision only, our averaging operators involve only the direct neighbours of a vertex and can be configured to generalize B-spline multiresolution to arbitrary-topology surfaces.
Our subdivision surfaces are hence able to exhibit C^d continuity at regular vertices (for arbitrary values of d) and appear to exhibit C^1 continuity at extraordinary vertices. Smooth reverse and non-uniform subdivisions are additionally supported.

Recently, deep learning-based video super-resolution (SR) methods have combined convolutional neural networks (CNNs) with motion compensation to estimate a high-resolution (HR) video from its low-resolution (LR) counterpart. However, most previous methods conduct motion estimation at a downscaled resolution to handle large motions, which can harm the accuracy of motion estimation due to the reduced spatial resolution. Moreover, these methods usually treat different types of intermediate features equally, lacking the flexibility to emphasize the information that is meaningful for revealing high-frequency details. In this paper, to solve these issues, we propose a deep dual attention network (DDAN), comprising a motion compensation network (MCNet) and an SR reconstruction network (ReconNet), to fully exploit spatio-temporal informative features for accurate video SR. The MCNet progressively learns optical flow representations to synthesize motion information across adjacent frames in a pyramid fashion. To decrease the mis-registration errors caused by optical-flow-based motion compensation, we extract the detail components of the original LR neighboring frames as complementary information for accurate feature extraction. In the ReconNet, we apply dual attention mechanisms to a residual unit, forming a residual attention unit that focuses on the informative intermediate features for high-frequency detail recovery.
Extensive experimental results on numerous datasets demonstrate that the proposed method achieves superior performance in terms of quantitative and qualitative assessments compared with state-of-the-art methods.

Driven by recent advances in human-centered computing, facial expression recognition (FER) has attracted significant attention in many applications. However, most conventional approaches either perform face frontalization on a non-frontal facial image or learn a separate classifier for each pose. Different from existing methods, this paper proposes an end-to-end deep learning model that allows simultaneous facial image synthesis and pose-invariant facial expression recognition by exploiting the shape geometry of the face image. The proposed model is based on a generative adversarial network (GAN) and enjoys several merits. First, given an input face and a target pose and expression designated by a set of facial landmarks, an identity-preserving face can be generated under the guidance of the target pose and expression. Second, the identity representation is explicitly disentangled from both expression and pose variations through the shape geometry delivered by the facial landmarks. Third, our model can automatically generate face images with different expressions and poses in a continuous way, enlarging and enriching the training set for the FER task.
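The Fitts's-law comparison described above rests on computing an index of difficulty and a throughput for each task. As a minimal sketch, the snippet below uses the standard Shannon formulation, ID = log2(D/W + 1); the function names and example values are illustrative and not taken from the study itself, which may use a different variant of the formula:

```python
import math

def index_of_difficulty(distance: float, width: float) -> float:
    """Shannon formulation of Fitts's index of difficulty, in bits.

    distance: movement amplitude to the target (same units as width)
    width:    target tolerance, e.g. the diameter of the acceptance region
    """
    return math.log2(distance / width + 1)

def throughput(distance: float, width: float, movement_time: float) -> float:
    """Throughput in bits/s: index of difficulty divided by movement time."""
    return index_of_difficulty(distance, width) / movement_time

# Hypothetical docking task: amplitude 30 cm, tolerance 2 cm, completed in 2 s
id_bits = index_of_difficulty(30, 2)   # log2(16) = 4.0 bits
tp = throughput(30, 2, 2.0)            # 2.0 bits/s
```

Because ID normalizes task geometry, tasks with different amplitudes and tolerances but equal ID are expected to take comparable movement time, which is what makes cross-study comparisons of the kind discussed above possible.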