[…]tive in terms of both storage and computation efficiency. Thus, we conclude that structured pruning has greater potential than non-structured pruning, and we encourage the community to focus on studying DNN inference acceleration with structured sparsity.

Surface exploration in virtual reality has great potential to enrich the user's experience; it could, for example, be used to train and simulate medical palpation. During palpation, users tap, indent, rub in contact with, and retract from the surface of a sample to estimate its underlying properties. However, there is as yet no good approach to rendering such intricate interaction realistically. This paper introduces 6-degree-of-freedom (DoF) encountered-type haptic display technology for simulating surface exploration tasks. Among the different phases of exploration, the focus lies on the in-contact sliding phase. Two novel control approaches to render in-contact sliding over a virtual surface are elaborated: the first rendering method generates lateral frictional forces as the finger slides over the surface, while the second adjusts the inclination of the end-effector to render tissue properties. With both methods, a stiff nodule embedded in a soft tissue was encoded in a grid-based manner. User experiments were carried out to find proper parameter and intensity ranges and to confirm the feasibility of the new rendering schemes. Participants indicated that both rendering schemes felt realistic. Compared to earlier work in which only the vertical stiffness was altered, lower thresholds for detecting and localising embedded virtual nodules were found. […]

MicroRNAs (miRNAs) are a class of non-coding RNAs that play critical roles in many biological processes, such as cell growth, development, differentiation, and aging. Increasing numbers of studies have revealed that miRNAs are closely involved in many human diseases, so the prediction of miRNA-disease associations is of great significance to the study of the pathogenesis, diagnosis, and intervention of human disease. However, biological experimental methods are usually expensive in time and money, whereas computational methods provide an efficient way to infer underlying disease-related miRNAs. In this study, we propose a novel method, called SVAEMDA, to predict potential miRNA-disease associations. Our method treats miRNA-disease association prediction as a semi-supervised learning problem. SVAEMDA integrates disease semantic similarity, miRNA functional similarity, and the respective Gaussian interaction profile (GIP) similarities; the integrated similarities are used to learn representations of diseases and miRNAs. SVAEMDA then trains a variational autoencoder based predictor on known miRNA-disease associations, represented as concatenated dense vectors, and uses the reconstruction probability of the predictor to measure the correlation of miRNA-disease pairs. Experimental results show that SVAEMDA outperforms other state-of-the-art methods.
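To make the scoring idea above concrete, here is a minimal sketch (not the SVAEMDA reference implementation) of a variational autoencoder trained on concatenated miRNA-disease feature vectors, with candidate pairs ranked by a Monte-Carlo proxy for reconstruction probability. The feature dimensions, layer sizes, and the MSE-based likelihood proxy are all assumptions for illustration.

```python
# Sketch only: VAE over concatenated miRNA/disease vectors; pairs are scored
# by (a proxy for) reconstruction probability, as the abstract describes.
import torch
import torch.nn as nn

class PairVAE(nn.Module):
    def __init__(self, in_dim, hid_dim=256, z_dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.mu = nn.Linear(hid_dim, z_dim)
        self.logvar = nn.Linear(hid_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, in_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.dec(z), mu, logvar

def elbo_loss(x, recon, mu, logvar):
    rec = nn.functional.mse_loss(recon, x, reduction="sum")  # Gaussian recon term
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

@torch.no_grad()
def reconstruction_score(model, x, n_samples=16):
    """Monte-Carlo proxy for reconstruction probability: higher = likelier pair."""
    errs = torch.stack([((model(x)[0] - x) ** 2).sum(dim=-1)
                        for _ in range(n_samples)])
    return -errs.mean(dim=0)

if __name__ == "__main__":
    m_dim, d_dim = 495, 383                    # assumed feature sizes
    pairs = torch.randn(1024, m_dim + d_dim)   # stand-in for known associations
    model = PairVAE(m_dim + d_dim)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(50):
        recon, mu, logvar = model(pairs)
        loss = elbo_loss(pairs, recon, mu, logvar)
        opt.zero_grad(); loss.backward(); opt.step()
    print(reconstruction_score(model, pairs[:5]))
```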
The task of image generation has started receiving attention from artists and designers, providing inspiration for new creations. However, exploiting the results of deep generative models such as generative adversarial networks (GANs) can be long and tedious given the lack of existing tools. In this work, we propose a simple strategy to inspire creators with new generations learned from a dataset of their choice, while providing some control over the output. We design a simple optimization method to find the latent parameters corresponding to the generation closest to any input inspirational image: given an inspirational image of the user's choosing, we perform several optimization steps to recover the optimal parameters from the model's latent space. We tested several exploration methods, from classical gradient descent to gradient-free optimizers. Many gradient-free optimizers need only comparisons (is one image better or worse than another?), so they can be used without a numerical criterion or even an inspirational image, relying on human preferences alone. Thus, by iterating on one's preferences, we can build robust facial composite or fashion generation algorithms. Our results on four datasets of faces, fashion images, and textures show that satisfactory images are effectively retrieved in most cases.
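The sketch below illustrates the comparison-only search described above with a (1+1) evolution strategy over the latent space. It is a hedged illustration, not the paper's setup: the toy `make_generator` stands in for a pretrained GAN, and the pixel-distance `prefers` oracle stands in for either a numerical criterion or a human preference click.

```python
# Sketch of comparison-based latent search: a (1+1) evolution strategy that
# needs only "is A better than B?" judgements, per the abstract above.
import torch

def make_generator(z_dim=64, seed=0):
    """Stand-in for a pretrained generator: latent (z_dim,) -> image (3,32,32)."""
    g = torch.Generator().manual_seed(seed)
    W = torch.randn(3 * 32 * 32, z_dim, generator=g)
    return lambda z: torch.tanh(W @ z).reshape(3, 32, 32)

def prefers(img_a, img_b, target):
    """Comparison oracle: is img_a closer to the target than img_b?
    A human's preference click could replace this function entirely."""
    return torch.norm(img_a - target) < torch.norm(img_b - target)

def one_plus_one_es(G, target, z_dim=64, steps=500, sigma=0.5):
    z = torch.randn(z_dim)
    best = G(z)
    for _ in range(steps):
        cand_z = z + sigma * torch.randn(z_dim)  # mutate the current latent
        cand = G(cand_z)
        if prefers(cand, best, target):          # keep only improvements
            z, best = cand_z, cand
            sigma *= 1.1                         # 1/5th-rule-style step adaptation
        else:
            sigma *= 0.9
    return z, best

if __name__ == "__main__":
    G = make_generator()
    target = G(torch.randn(64))                  # a reachable "inspirational" image
    z, img = one_plus_one_es(G, target)
    print("final distance:", torch.norm(img - target).item())
```

Because the loop consumes only pairwise preferences, the same skeleton supports the human-in-the-loop mode the abstract mentions, where no numerical criterion or target image exists.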
Most face recognition methods employ single-bit binary descriptors for face representation. Information is lost in quantizing real-valued descriptors to binary descriptors, which greatly limits their robustness for face recognition. In this study, we propose a novel weighted feature histogram (WFH) method over multi-scale local patches using multi-bit binary descriptors for face recognition. First, to obtain multi-scale information from the face image, local patches are extracted using a multi-scale local patch generation (MSLPG) method. Second, with the goal of reducing the quantization loss of binary descriptors, a novel multi-bit local binary descriptor learning (MBLBDL) method is proposed to extract multi-bit local binary descriptors (MBLBDs). In MBLBDL, a learned mapping matrix and novel multi-bit coding rules are employed to project pixel difference vectors (PDVs) into the MBLBDs in each local patch. Finally, a novel robust weight learning (RWL) method […].

We propose to learn a cascade of globally-optimized modular boosted ferns (GoMBF) to solve multi-modal facial motion regression for real-time 3D facial tracking from a monocular RGB camera. GoMBF is a deep composition of multiple regression models, each of which is a boosted ferns model initially trained to predict partial motion parameters of the same modality; the models are then concatenated via a global optimization step to form a single strong boosted ferns model that can effectively handle the whole regression target. It explicitly copes with the modality variety in the output variables while exhibiting greater fitting power and faster learning than conventional boosted ferns. By further cascading a sequence of GoMBFs (GoMBF-Cascade) to regress facial motion parameters, we achieve tracking performance on a variety of in-the-wild videos that is competitive with state-of-the-art methods which either have higher computational complexity or require much more training data. This provides a robust and elegant solution to real-time 3D facial tracking from a small training set, making it more practical in real-world applications. We further investigate in depth the effect of synthesized facial images on training non-deep-learning methods such as GoMBF-Cascade for 3D facial tracking. We apply three types of synthetic images with various levels of naturalness to train two different tracking methods, and compare the performance of tracking models trained on real data, on synthetic data, and on a mixture of the two. The experimental results indicate that (i) a model trained purely on synthetic facial imagery hardly generalizes to unconstrained real-world data, and (ii) adding synthetic faces to the training set benefits tracking in certain scenarios but degrades the tracking model's generalization ability. These two insights could benefit a range of non-deep-learning facial image analysis tasks for which labelled real data are difficult to acquire.

Fitting ellipses from unrecognized data is a fundamental problem in computer vision and pattern recognition. Classic least-squares based methods are sensitive to outliers. To address this problem, we present a novel and effective method, hierarchical Gaussian mixture models (HGMM), for ellipse fitting in noisy, outlier-contaminated, and occluded settings, built on Gaussian mixture models (GMM). The method is crafted into two layers to significantly improve its fitting accuracy and robustness for data containing outliers and noise, and it has been shown to effectively narrow the iterative interval of the kernel bandwidth, thereby speeding up ellipse fitting. Extensive experiments are conducted on synthetic data with substantial outliers (up to 60%) and strong noise (up to 200%), as well as on real images, including complex benchmark images with heavy occlusion and images from versatile applications. We compare our results with those of representative state-of-the-art methods and demonstrate that the proposed method has several salient advantages, such as high robustness against outliers and noise, high fitting accuracy, and improved overall performance.
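For context on the baseline this abstract argues against, the sketch below implements the classic direct least-squares conic fit (in the style of Fitzgibbon et al.), which treats every point equally and is therefore outlier-sensitive; it is not the HGMM method itself, and the demo ellipse parameters are arbitrary.

```python
# Classic direct least-squares ellipse fit: the outlier-sensitive baseline the
# abstract contrasts with robust GMM-based fitting. Sketch, not the paper's code.
import numpy as np

def fit_ellipse_lsq(x, y):
    """Fit conic ax^2+bxy+cy^2+dx+ey+f=0 under the ellipse constraint 4ac-b^2=1."""
    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    S = D.T @ D                      # scatter matrix of the design matrix
    C = np.zeros((6, 6))             # constraint matrix encoding 4ac - b^2 = 1
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    # Generalized eigenproblem S a = lambda C a, solved via eig(S^-1 C);
    # the ellipse solution corresponds to the unique positive eigenvalue.
    eigval, eigvec = np.linalg.eig(np.linalg.solve(S, C))
    k = int(np.argmax(eigval.real))
    return eigvec[:, k].real         # conic coefficients (a, b, c, d, e, f)

if __name__ == "__main__":
    t = np.linspace(0, 2 * np.pi, 200)
    x = 3.0 * np.cos(t) + 1.0 + 0.05 * np.random.randn(t.size)  # noisy ellipse
    y = 1.5 * np.sin(t) - 2.0 + 0.05 * np.random.randn(t.size)
    a = fit_ellipse_lsq(x, y)
    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    print("conic coefficients:", np.round(a, 4))
    print("mean algebraic residual:", np.abs(D @ a).mean())
```

A single gross outlier appended to `x, y` visibly corrupts this fit, which is exactly the failure mode that motivates reweighting each point by a mixture model as in HGMM.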
We present a novel method to jointly learn a 3D face parametric model and 3D face reconstruction from diverse sources. Previous methods usually learn 3D face modeling from a single kind of source, such as scanned data or in-the-wild images. Although 3D scanned data contain accurate geometric information about face shapes, the capture systems are expensive and such datasets usually cover a small number of subjects. In-the-wild face images, on the other hand, are easily obtained in large numbers, but they contain no explicit geometric information. In this paper, we propose a method to learn a unified face model from these diverse sources. Besides scanned face data and face images, we also utilize a large number of RGB-D images captured with an iPhone X to bridge the gap between the two sources. Experimental results demonstrate that with training data from more sources, we can learn a more powerful face model.

Existing image compression methods usually choose or optimize low-level representations manually and struggle to restore texture at low bit rates. Recently, deep neural network (DNN)-based image compression methods have achieved impressive results. To achieve better perceptual quality, generative models are widely used, especially generative adversarial networks (GANs). However, training GANs is intractable, especially for high-resolution images, with the challenges of unconvincing reconstructions and unstable training. To overcome these problems, we propose a novel DNN-based image compression framework. The key idea is to decompose an image into multi-scale sub-images using the proposed Laplacian pyramid based multi-scale networks. For each pyramid scale, we train a specific DNN to exploit the compressive representation. Each scale is optimized for different aspects, including pixels, semantics, distribution, and entropy, for a good "rate-distortion-perception" trade-off. By independently optimizing each pyramid scale, we make each stage manageable and each sub-image plausible. Experimental results demonstrate that our method achieves state-of-the-art performance, with advantages over existing methods in visual quality. Additionally, better performance on downstream visual analysis tasks conducted on the reconstructed images validates the excellent semantics-preserving ability of the proposed method.

Recent progress on salient object detection (SOD) mostly benefits from the explosive development of convolutional neural networks (CNNs). However, much of the improvement comes with larger network sizes and heavier computation overhead, which, in our view, is not mobile-friendly and is thus difficult to deploy in practice. To promote more practical SOD systems, we introduce a novel Stereoscopically Attentive Multi-scale (SAM) module, which adopts a stereoscopic attention mechanism to adaptively fuse features of various scales. Building on this module, we propose an extremely lightweight network, SAMNet, for SOD. Extensive experiments on popular benchmarks demonstrate that SAMNet achieves accuracy comparable to state-of-the-art methods while running at a GPU speed of 343 fps and a CPU speed of 5 fps for 336 × 336 inputs with only 1.33M parameters. SAMNet therefore paves a new path towards practical SOD. The source code is available on the project page: https://mmcheng.net/SAMNet/.
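To illustrate the general idea of attention-weighted multi-scale fusion described in the SAM abstract, here is a minimal sketch. It is not the released SAMNet code (see the project page above): the dilated depthwise-separable branches, the per-scale softmax attention, and all layer sizes are assumptions chosen to keep the module lightweight.

```python
# Sketch of attention-weighted multi-scale feature fusion in the spirit of a
# lightweight SAM-style module (assumed architecture, not the SAMNet release).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleAttentiveFusion(nn.Module):
    def __init__(self, channels=32, n_scales=4):
        super().__init__()
        # One depthwise-separable dilated conv per scale keeps parameters low.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=2**i, dilation=2**i,
                          groups=channels, bias=False),
                nn.Conv2d(channels, channels, 1, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True))
            for i in range(n_scales)])
        # One attention logit map per scale, normalized across scales below.
        self.attn = nn.Conv2d(channels, n_scales, 1)

    def forward(self, x):
        feats = torch.stack([b(x) for b in self.branches], dim=1)  # (B,S,C,H,W)
        weights = F.softmax(self.attn(x), dim=1).unsqueeze(2)      # (B,S,1,H,W)
        return (weights * feats).sum(dim=1)                        # (B,C,H,W)

if __name__ == "__main__":
    m = MultiScaleAttentiveFusion()
    y = m(torch.randn(2, 32, 64, 64))
    print(y.shape)  # torch.Size([2, 32, 64, 64])
```

The softmax over the scale dimension lets every spatial location choose its own blend of receptive-field sizes, which is the adaptive fusion behaviour the abstract attributes to the SAM module.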