We address these problems by developing an ensemble model that refines the heterogeneous and inconsistent results of the existing methods by taking into account network information such as network propagation and network properties. DEG candidates that are predicted with weak evidence by the existing tools are re-classified by our proposed ensemble model for the transcriptome data. Tested on 10 RNA-seq datasets downloaded from Gene Expression Omnibus (GEO), our method showed excellent performance, ranking first in detecting ground-truth (GT) genes in eight datasets and finding almost all GT genes in six datasets. In contrast, the performance of all existing methods varied significantly across the 10 datasets. Because of this design principle, our method can naturally accommodate any new DEG method.
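To make the network-propagation ingredient concrete, the sketch below re-scores DEG candidates by diffusing seed evidence over a gene network with a random walk with restart. This is a minimal sketch under assumed conventions (function name, score normalization, restart probability), not the authors' exact ensemble model.

```python
# Illustrative network propagation (random walk with restart) for
# re-scoring weak DEG candidates; details here are assumptions.
import numpy as np

def propagate_scores(adj, seed_scores, restart=0.5, tol=1e-6, max_iter=100):
    """Diffuse seed DEG evidence over a gene network.

    adj:         (n, n) adjacency matrix of the gene network.
    seed_scores: (n,) initial DEG evidence, e.g. 1 - adjusted p-value;
                 assumed to contain at least one nonzero entry.
    restart:     probability of restarting at the seed distribution.
    """
    # Column-normalize the adjacency matrix into a transition matrix.
    col_sums = adj.sum(axis=0).astype(float)
    col_sums[col_sums == 0] = 1.0          # avoid division by zero
    w = adj / col_sums

    p0 = seed_scores / seed_scores.sum()   # normalized seed distribution
    p = p0.copy()
    for _ in range(max_iter):
        p_next = (1 - restart) * (w @ p) + restart * p0
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    # High smoothed scores support re-classifying a weak candidate as a DEG.
    return p
```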
Many real-world data can be modeled by a graph with a set of nodes interconnected by multiple relationships. Such a rich graph is called a multilayer graph or network. Providing useful visualization tools to support the query process for such graphs is challenging. Although many approaches have addressed visual query construction, few efforts have been made to provide a contextualized exploration of query results and suggestion strategies to refine the original query. This is due to several issues, such as (i) the size of the graphs, (ii) the large number of retrieved results, and (iii) the way they can be organized to facilitate their exploration. In this paper, we present VERTIGo, a novel visual platform to query, explore, and support the analysis of large multilayer graphs. VERTIGo provides coordinated views to navigate and explore the large set of retrieved results at different granularity levels. In addition, the proposed system supports the refinement of the query through visual suggestions that guide the user through the exploration process. Two examples and a user study demonstrate how VERTIGo can be used to perform visual analysis (query, exploration, and suggestion) on real-world multilayer networks.

Eye-tracking technology is being increasingly integrated into mixed reality devices. Although critical applications are being enabled, there are significant possibilities for violating user privacy expectations. We show that there is an appreciable risk of unique user identification even under natural viewing conditions in virtual reality. This identification would allow an app to connect a user's personal ID with their work ID without needing their consent, for example. To mitigate such risks, we propose a framework that incorporates gatekeeping via the design of the application programming interface and via software-implemented privacy mechanisms. Our results indicate that these mechanisms can reduce the rate of identification from as much as 85% to as low as 30%. The impact of introducing these mechanisms is less than 1.5° of error in gaze position for gaze prediction. Gaze data streams can thus be made private while still allowing for gaze prediction, for example, during foveated rendering. Our approach is the first to support privacy-by-design in the flow of eye-tracking data within mixed reality use cases.
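A software-implemented privacy mechanism of the kind described could, for example, perturb the gaze stream before applications ever see it. The sketch below uses additive Gaussian noise; the noise model, function name, and parameters are assumptions, not the paper's exact mechanisms.

```python
# Minimal sketch of a gatekeeping-style gaze privacy mechanism:
# perturb gaze samples before exposing them to applications.
import numpy as np

def privatize_gaze(gaze_deg, noise_sigma_deg=1.0, rng=None):
    """Add zero-mean Gaussian noise to gaze angles given in degrees.

    Noise degrades the subtle, user-specific signatures that enable
    identification while keeping the added gaze error small, on the
    order of the ~1.5 deg budget the abstract reports.
    """
    rng = rng or np.random.default_rng()
    return gaze_deg + rng.normal(0.0, noise_sigma_deg, size=gaze_deg.shape)

# Usage idea: an API gatekeeper exposes only privatize_gaze(raw_stream)
# to apps such as foveated rendering, never the raw samples.
```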
Current avatar representations used in immersive VR applications lack features that may be important for supporting natural behaviors and effective communication among individuals. This study investigates the impact of the visual and nonverbal cues afforded by three different types of avatar representations in the context of several cooperative tasks. The avatar types we compared are No_Avatar (HMD and controllers only), Scanned_Avatar (wearing an HMD), and Real_Avatar (video see-through). The subjective and objective measures we used to assess the quality of interpersonal communication include surveys of social presence, interpersonal trust, communication satisfaction, and attention to behavioral cues, plus two behavioral measures: duration of mutual gaze and number of unique words spoken. We found that participants reported higher levels of trustworthiness in the Real_Avatar condition than in the Scanned_Avatar and No_Avatar conditions. They also reported a greater level of attentional focus on facial expressions than in the No_Avatar condition and, for some tasks, spent more time attempting to engage in mutual gaze behavior than in the Scanned_Avatar and No_Avatar conditions. In both the Real_Avatar and Scanned_Avatar conditions, participants reported higher levels of co-presence than in the No_Avatar condition. In the Scanned_Avatar condition, compared with the Real_Avatar and No_Avatar conditions, participants reported higher levels of attention to body posture. Overall, our exit survey revealed that a majority of participants (66.67%) preferred the Real_Avatar, compared with 25.00% for the Scanned_Avatar and 8.33% for the No_Avatar. These findings provide novel insight into how a user's experience in a social VR scenario is affected by the type of avatar representation provided.

We present a novel redirected walking controller based on alignment that allows the user to explore large and complex virtual environments while minimizing the number of collisions with obstacles in the physical environment. Our alignment-based redirection controller, ARC, steers the user such that their proximity to obstacles in the physical environment matches their proximity to obstacles in the virtual environment as closely as possible. To quantify a controller's performance in complex environments, we introduce a new metric, Complexity Ratio (CR), to measure relative environment complexity and characterize the difference in navigational complexity between the physical and virtual environments. Through extensive simulation-based experiments, we show that ARC significantly outperforms current state-of-the-art controllers in its ability to steer the user on a collision-free path. We also show, through quantitative and qualitative measures of performance, that our controller is robust in complex environments with many obstacles. Our method is applicable to arbitrary environments and operates without any user input or parameter tweaking, aside from the layout of the environments. We have implemented our algorithm on the Oculus Quest head-mounted display and evaluated its performance in environments of varying complexity. Our project website is available at https://ganuna.umd.edu/arc/.
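To make the alignment idea concrete, the following toy sketch picks a translation gain from the ratio of virtual to physical obstacle distance so the two proximities stay matched. The gain bounds follow commonly cited redirected-walking limits; everything else is a simplification of the published controller, not its actual implementation.

```python
# Toy sketch of alignment-based redirection: scale virtual translation
# so the user approaches physical and virtual obstacles at matched rates.
def alignment_translation_gain(phys_dist, virt_dist,
                               min_gain=0.86, max_gain=1.26):
    """Return virtual meters traveled per physical meter walked.

    phys_dist: distance to the nearest physical obstacle (meters).
    virt_dist: distance to the nearest virtual obstacle (meters).

    Setting the gain to virt_dist / phys_dist means the user reaches
    both obstacles at about the same moment, keeping physical proximity
    aligned with virtual proximity.
    """
    if phys_dist <= 0:
        return 1.0                         # degenerate case: no scaling
    gain = virt_dist / phys_dist
    # Clamp to perceptually tolerable translation-gain bounds.
    return max(min_gain, min(max_gain, gain))
```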
Surround-view panoramic images and videos have become a popular form of media for interactive viewing on mobile devices and virtual reality headsets. Viewing such media provides a sense of immersion by allowing users to control their view direction and experience an entire environment. When using a virtual reality headset, the level of immersion can be improved by leveraging stereoscopic capabilities. Stereoscopic images are generated in pairs, one for the left eye and one for the right eye, and provide an important depth cue for the human visual system. For computer-generated imagery, rendering proper stereo pairs is well understood for a fixed view. However, it is much more difficult to create omnidirectional stereo pairs for a surround-view projection that work well when looking in any direction. One major drawback of traditional omnidirectional stereo images is that they suffer from binocular misalignment in the peripheral vision as a user's view direction approaches the zenith or nadir (north or south pole) of the projection sphere. This paper presents a real-time geometry-based approach for omnidirectional stereo rendering that fits into the standard rendering pipeline. Our approach includes tunable parameters that enable pole merging: a reduction in the stereo effect near the poles that can minimize binocular misalignment. Results from a user study indicate that pole merging reduces the visual fatigue and discomfort associated with binocular misalignment without inhibiting depth perception.
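A simple way to realize pole merging is to attenuate the stereo eye separation as the view elevation approaches the poles, so the two eyes converge to a single monoscopic view at the zenith and nadir. The sketch below uses a cosine falloff; the falloff shape and parameter names are assumptions rather than the paper's exact formulation.

```python
# Sketch of pole merging: shrink eye separation toward zero near the poles.
import math

def merged_eye_separation(ipd, elevation_rad,
                          merge_start_deg=60.0, pole_deg=90.0):
    """Return the eye separation to use at a given view elevation.

    ipd:             full interpupillary distance (meters).
    elevation_rad:   elevation of the view ray, 0 at the horizon,
                     +/- pi/2 at the poles.
    merge_start_deg: elevation (degrees) at which merging begins.
    """
    e = abs(math.degrees(elevation_rad))
    if e <= merge_start_deg:
        return ipd                                  # full stereo
    if e >= pole_deg:
        return 0.0                                  # fully merged (mono)
    # Smoothly blend from full IPD down to zero across the merge band.
    t = (e - merge_start_deg) / (pole_deg - merge_start_deg)
    blend = 0.5 * (1.0 + math.cos(math.pi * t))     # cosine ease, 1 -> 0
    return ipd * blend
```

Tuning merge_start_deg trades stereo coverage against misalignment: a lower value starts merging earlier and is more conservative.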
Human visual attention in immersive virtual reality (VR) is key for many important applications, such as content design, gaze-contingent rendering, and gaze-based interaction. However, prior work has typically focused on free-viewing conditions that have limited relevance for practical applications. We first collected eye-tracking data from 27 participants performing a visual search task in four immersive VR environments. Based on this dataset, we provide a comprehensive analysis of the collected data and reveal correlations between users' eye fixations and other factors, i.e., users' historical gaze positions, task-related objects, saliency information of the VR content, and users' head rotation velocities. Based on this analysis, we propose FixationNet, a novel learning-based model to forecast users' eye fixations in the near future in VR. We evaluate the performance of our model in free-viewing and task-oriented settings and show that it outperforms the state of the art by a large margin of 19.8% (from a mean error of 2.93° to 2.35°) in free viewing and of 15.1% (from 2.05° to 1.74°) in task-oriented situations. As such, our work provides new insights into task-oriented attention in virtual environments and guides future work on this important topic in VR research.

Haptic sensation plays an important role in providing physical information to users in both real and virtual environments. To produce high-fidelity haptic feedback, various haptic devices and tactile rendering methods have been explored in myriad scenarios, and the perception deviation between virtual and real environments has been investigated. However, tactile sensitivity for touch perception in a virtual environment has not been fully studied; thus, the guidance needed to design haptic feedback quantitatively for virtual reality systems is lacking. This paper investigates users' tactile sensitivity and explores their perceptual thresholds when immersed in a virtual environment, utilizing electrovibration tactile feedback and generating tactile stimuli with different waveform, frequency, and amplitude characteristics. To this end, two psychophysical experiments were designed, and the experimental results were analyzed. We believe that the significance and potential of our study on tactile perceptual thresholds can promote future research focused on creating a favorable haptic experience for VR applications.

To provide immersive haptic experiences, proxy-based haptic feedback systems for virtual reality (VR) face two central challenges: (1) similarity and (2) colocation. To solve challenge (1), physical proxy objects need to be sufficiently similar to their virtual counterparts in terms of haptic properties; for challenge (2), proxies and virtual counterparts need to be sufficiently colocated to allow for seamless interactions. To address these challenges, past research introduced, among others, two successful techniques: (a) Dynamic Passive Haptic Feedback (DPHF), a hardware-based technique that leverages actuated props that adapt their physical state during the VR experience, and (b) haptic retargeting, a software-based technique that leverages hand redirection to bridge spatial offsets between real and virtual objects. These two concepts have not previously been studied in combination. This paper proposes combining both techniques and reports the results of a perceptual and a psychophysical experiment situated in a proof-of-concept scenario focused on the perception of virtual weight distribution.
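For intuition, hand redirection of the kind haptic retargeting uses can be sketched as a warp that shifts the rendered hand by a growing fraction of the real-to-virtual offset as the reach progresses. The linear body-warping scheme below is an assumption (in the spirit of published haptic retargeting work), not the exact warp used in this paper, which studies such retargeting in combination with DPHF.

```python
# Minimal sketch of hand-redirection-based haptic retargeting: warp the
# virtual hand toward the virtual object as the real hand approaches the
# physical proxy, so both are reached at the same moment.
import numpy as np

def retargeted_hand(real_hand, real_start, proxy_pos, offset):
    """Compute the virtual hand position for the current frame.

    real_hand:  current physical hand position, shape (3,).
    real_start: hand position when the reach began, shape (3,).
    proxy_pos:  physical proxy object position, shape (3,).
    offset:     virtual_object_pos - proxy_pos, the spatial gap to
                bridge, shape (3,).
    """
    total = np.linalg.norm(proxy_pos - real_start)
    if total == 0:
        return real_hand + offset
    # Reach progress: 0 at the start of the motion, 1 at the proxy.
    alpha = np.clip(np.linalg.norm(real_hand - real_start) / total, 0.0, 1.0)
    # Shift the rendered hand by a growing fraction of the offset, so the
    # warp is imperceptible early in the reach and complete on contact.
    return real_hand + alpha * offset
```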