Our results show that our system generalizes well to different movie frames, achieving better results than existing solutions.
Recent advances in motion tracking (e.g., Visual-Inertial Odometry) allow a mobile phone to be used as a 3D pen, significantly benefiting various mobile Augmented Reality (AR) applications based on 3D curve creation. However, when creating 3D curves on and around physical objects with mobile AR, tracking might be less robust or even lost due to camera occlusion or textureless scenes. This motivates us to study how to achieve natural interaction with minimal tracking errors during close interaction between a mobile phone and physical objects. To this end, we contribute an elicitation study on input point and phone grip, and a quantitative study on tracking errors. Based on the results, we present a system for direct 3D drawing with an AR-enabled mobile phone as a 3D pen, and for interactive correction of 3D curves with tracking errors in mobile AR. We demonstrate the usefulness and effectiveness of our system in two applications: in-situ 3D drawing and direct 3D measurement.
Diffuse reverberation is ultrasound image noise caused by multiple reflections of the transmitted pulse before it returns to the transducer; it degrades image quality and impedes the estimation of displacement or flow in techniques such as elastography and Doppler imaging. Diffuse reverberation appears as spatially incoherent noise in the channel signals, where it also degrades the performance of adaptive beamforming methods, sound speed estimation, and other methods that require measurements from channel signals. In this paper, we propose a custom 3D fully convolutional neural network (3DCNN) to reduce diffuse reverberation noise in the channel signals. The 3DCNN was trained with channel signals from simulations of random targets that include models of reverberation and thermal noise, and was then evaluated on both phantom and in-vivo experimental data. The 3DCNN showed improvements in image quality metrics such as generalized contrast-to-noise ratio (GCNR), lag-one coherence (LOC), contrast-to-noise ratio (CNR), and contrast for anechoic regions in both phantom and in-vivo experiments. Visually, the contrast of anechoic regions was greatly improved. The CNR was improved in some cases; however, the 3DCNN appears to strongly remove uncorrelated, low-amplitude signal. In images of the in-vivo carotid artery and thyroid, the 3DCNN was compared to short-lag spatial coherence (SLSC) imaging and spatial prediction filtering (FXPF), and demonstrated improved contrast, GCNR, and LOC, whereas FXPF only improved contrast and SLSC only improved CNR.
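As a rough illustration of the idea above, the sketch below shows a small 3D fully convolutional denoiser applied to a block of ultrasound channel data. The layer count, kernel sizes, residual formulation, and tensor layout are illustrative assumptions, not the architecture published with the 3DCNN.

```python
# Minimal sketch of a 3D fully convolutional denoiser for ultrasound channel
# signals. Layer widths, kernel sizes, and the (batch, 1, emission, channel,
# axial-sample) layout are assumptions made for this example.
import torch
import torch.nn as nn

class ChannelSignalDenoiser3D(nn.Module):
    def __init__(self, width: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(width, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(width, 1, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual formulation: predict the reverberation/thermal-noise
        # component and subtract it from the noisy channel signals.
        return x - self.net(x)

if __name__ == "__main__":
    # One simulated acquisition: 8 emissions x 64 receive channels x 256 axial samples.
    noisy = torch.randn(1, 1, 8, 64, 256)
    cleaned = ChannelSignalDenoiser3D()(noisy)
    print(cleaned.shape)  # torch.Size([1, 1, 8, 64, 256])
```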
This paper addresses the task of detecting and recognizing human-object interactions (HOI) in images. Considering the intrinsic complexity and structural nature of the task, we introduce a cascaded parsing network (CP-HOI) for multi-stage, structured HOI understanding. At each cascade stage, an instance detection module progressively refines HOI proposals and feeds them into a structured interaction reasoning module; each of the two modules is also connected to its predecessor in the previous stage. The structured interaction reasoning module is built upon a graph parsing neural network (GPNN). In particular, the GPNN infers a parse graph that i) interprets meaningful HOI structures via a learnable adjacency matrix, and ii) predicts action (edge) labels. Within an end-to-end message-passing framework, the GPNN blends learning and inference, iteratively parsing HOI structures and reasoning over HOI representations (i.e., instance and relation features). Beyond relation detection at the bounding-box level, we make our framework flexible enough to perform fine-grained, pixel-wise relation segmentation; this provides a new glimpse into better relation modeling. A preliminary version of our CP-HOI model took 1st place in the ICCV 2019 Person in Context Challenge, on both relation detection and segmentation. CP-HOI also shows promising results on two popular HOI recognition benchmarks, i.e., V-COCO and HICO-DET.
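To make the graph-parsing idea concrete, here is a hedged sketch of one message-passing step over a soft, learnable adjacency matrix with edge (action) prediction. Feature sizes, the adjacency parameterization, and the GRU-style node update are assumptions for illustration, not the published CP-HOI implementation.

```python
# Illustrative single message-passing step in the spirit of a graph parsing
# neural network: score a soft parse graph, pass messages, refine node states,
# and classify actions on edges. All dimensions here are arbitrary examples.
import torch
import torch.nn as nn

class SoftGraphParser(nn.Module):
    def __init__(self, node_dim: int = 128, num_actions: int = 29):
        super().__init__()
        self.adjacency = nn.Linear(2 * node_dim, 1)       # link score per instance pair
        self.message = nn.Linear(node_dim, node_dim)      # message transform
        self.update = nn.GRUCell(node_dim, node_dim)      # node state refinement
        self.edge_classifier = nn.Linear(2 * node_dim, num_actions)

    def forward(self, nodes: torch.Tensor):
        # nodes: (N, node_dim) instance features from the detection module.
        n = nodes.size(0)
        pairs = torch.cat([nodes.unsqueeze(1).expand(n, n, -1),
                           nodes.unsqueeze(0).expand(n, n, -1)], dim=-1)
        adj = torch.sigmoid(self.adjacency(pairs)).squeeze(-1)  # (N, N) soft parse graph
        msgs = adj @ self.message(nodes)                        # aggregate weighted messages
        nodes = self.update(msgs, nodes)                        # refine instance features
        edge_logits = self.edge_classifier(pairs)               # action (edge) labels
        return nodes, adj, edge_logits

if __name__ == "__main__":
    feats = torch.randn(5, 128)               # 5 detected instances
    refined, adj, actions = SoftGraphParser()(feats)
    print(adj.shape, actions.shape)           # (5, 5) and (5, 5, 29)
```

In a cascaded setup, such a step would be applied at each stage, with the refined instance features feeding the next stage's detection and reasoning modules.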
Asthma and chronic obstructive pulmonary disease (COPD) can be confused in clinical diagnosis due to overlapping symptoms. The purpose of this study is to develop a method based on multivariate pulmonary sounds analysis for differential diagnosis of the two diseases.
The recorded 14-channel pulmonary sound data are mathematically modeled using a multivariate (vector) autoregressive (VAR) model, and the model parameters are fed to a classifier. Separate classifiers are used for each of the six sub-phases of the flow cycle, namely early, mid, and late inspiration and expiration, and the six decisions are combined to reach the final decision. Parameter classification is performed in a Bayesian framework with Gaussian mixture models (GMMs) assumed for the likelihoods, and the six sub-phase decisions are combined by weighted voting, where the weights are learned by a linear support vector machine (SVM) classifier. Fifty subjects were included in the study, 30 diagnosed with asthma and 20 with COPD.
The highest accuracy of the classifier is 98 percent, corresponding to correct classification rates of 100 and 95 percent for asthma and COPD, respectively. The most discriminative sub-phase for differentiating the two diseases is found to be mid-inspiration.
The methodology proves to be promising in terms of asthma-COPD differentiation based on acoustic information. The results also reveal that the six sub-phases are not equally pertinent in the differentiation.
Pulmonary sounds analysis may be a complementary tool in clinical practice for differential diagnosis of asthma and COPD, especially in the absence of reliable spirometric testing.
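A rough sketch of the classification pipeline described above is given below: fit a VAR model to each sub-phase's multichannel segment, classify the stacked VAR coefficients with per-class GMM likelihoods, and combine the six sub-phase decisions by a weighted vote. Model orders, mixture sizes, and the voting weights are placeholders, not the values used in the study.

```python
# Sketch of the VAR + GMM + weighted-voting pipeline; parameter choices here
# (lag order, mixture components, weights) are illustrative assumptions.
import numpy as np
from statsmodels.tsa.api import VAR
from sklearn.mixture import GaussianMixture

def var_features(segment: np.ndarray, order: int = 2) -> np.ndarray:
    """segment: (samples, 14) sub-phase recording -> flattened VAR coefficients."""
    fit = VAR(segment).fit(maxlags=order)
    return np.asarray(fit.params).ravel()

def fit_class_gmm(features: np.ndarray, components: int = 2) -> GaussianMixture:
    """Fit the class-conditional likelihood for one class and one sub-phase."""
    return GaussianMixture(n_components=components).fit(features)

def subphase_decision(feat, gmm_asthma, gmm_copd) -> int:
    """Bayesian decision with equal priors: +1 for asthma, -1 for COPD."""
    return 1 if gmm_asthma.score([feat]) > gmm_copd.score([feat]) else -1

def classify_subject(subphase_segments, models, weights) -> str:
    """subphase_segments: six (samples, 14) arrays, one per flow sub-phase.
    models: six (gmm_asthma, gmm_copd) pairs; weights: e.g. from a linear SVM."""
    votes = [subphase_decision(var_features(s), ga, gc)
             for s, (ga, gc) in zip(subphase_segments, models)]
    return "asthma" if np.dot(weights, votes) > 0 else "COPD"
```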
High-frequency irreversible electroporation (H-FIRE) is a tissue ablation modality that delivers bursts of electrical pulses in a positive phase, interphase delay (d1), negative phase, interpulse delay (d2) pattern. Despite accumulating evidence of the significance of these delays, their effects on therapeutic outcomes from clinically relevant H-FIRE waveforms have not been studied extensively.
We sought to determine whether modifications to the delays within H-FIRE bursts could yield a more desirable clinical outcome in terms of ablation volume versus extent of tissue excitation.
We used a modified spatially extended nonlinear node (SENN) nerve fiber model to evaluate excitation thresholds for H-FIRE bursts with varying delays. We then calculated non-thermal tissue ablation, thermal damage, and excitation in a clinically relevant numerical model.
Excitation thresholds were maximized by shortening d1, and extending d2 up to 1,000 µs increased excitation thresholds by at least 60% versus symmetric bursts.
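For reference, the sketch below constructs the burst pattern discussed above: each cycle is a positive phase, an interphase delay d1, a negative phase, and an interpulse delay d2. The pulse width, amplitude, cycle count, and sampling rate are arbitrary illustration values, not the waveforms evaluated in the study.

```python
# Sketch of an H-FIRE burst waveform; all numeric defaults are placeholders.
import numpy as np

def hfire_burst(pulse_us=1.0, d1_us=1.0, d2_us=1.0, cycles=50,
                amplitude_v=2500.0, fs_mhz=100.0):
    """Return (time_us, voltage_v) samples for one H-FIRE burst."""
    n_pulse = int(pulse_us * fs_mhz)
    n_d1 = int(d1_us * fs_mhz)
    n_d2 = int(d2_us * fs_mhz)
    cycle = np.concatenate([
        np.full(n_pulse, amplitude_v),   # positive phase
        np.zeros(n_d1),                  # interphase delay d1
        np.full(n_pulse, -amplitude_v),  # negative phase
        np.zeros(n_d2),                  # interpulse delay d2
    ])
    burst = np.tile(cycle, cycles)
    t_us = np.arange(burst.size) / fs_mhz
    return t_us, burst

# Lengthening d2 stretches the burst without adding delivered energy, which is
# the lever examined above for raising excitation thresholds.
t, v = hfire_burst(d2_us=10.0)
```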