Combined electric and acoustic stimulation (EAS) has demonstrated better speech recognition than conventional cochlear implants (CIs) and yields satisfactory performance under quiet conditions. However, when noise is present, both the electric and the acoustic signal may be distorted, resulting in poor recognition performance. To suppress noise effects, speech enhancement (SE) is a necessary unit in EAS devices. Recently, a time-domain SE algorithm based on fully convolutional neural networks (FCNs) with a short-time objective intelligibility (STOI)-based objective function (termed FCN(S) for short) has received increasing attention due to its simple structure and its effectiveness in restoring clean speech signals from noisy counterparts. With evidence showing the benefits of FCN(S) for normal speech, this study assesses its ability to improve the intelligibility of EAS-simulated speech. Objective evaluations and listening tests were conducted to examine the performance of FCN(S) in improving the intelligibility of normal and vocoded speech in noisy environments. The experimental results show that, compared with the traditional minimum mean-square error SE method and the deep denoising autoencoder SE method, FCN(S) yields larger intelligibility gains for both normal and vocoded speech. This study, the first to evaluate deep learning SE approaches for EAS, confirms that FCN(S) is an effective SE approach that may be integrated into an EAS processor to benefit users in noisy environments.

The interaction between the prescribed prosthetic knee and foot is critical to the safety of transfemoral prosthesis users, primarily during the stance phase of gait, when knee buckling can result in a fall.
Nonetheless, there is still a need for standardized approaches to quantify the effects of prosthetic component interactions and the associated mechanical function on user gait biomechanics. A numerical model based on a single inverted pendulum was defined to simulate sagittal-plane prosthetic limb stance and to predict the effects of prosthetic knee alignment and foot stiffness on the knee moment, in order to identify optimal solutions. Model validation against laboratory gait data suggests that it is appropriate for preliminary simulation of prosthetic gait during single-limb support, when prosthetic knee stability may be most at risk given the reliance on the prosthetic limb and proximal anatomy, but only for knees with flexion smaller than 4°. Model predictions identify a solution space containing the combinations of knee alignment and foot stiffness (via roll-over shape radius) that guarantee knee stability in early and mid-single-limb support while facilitating knee break at its end. Specifically, a posterior to in-line knee alignment should be combined with low to medium ankle-foot stiffness, whereas anterior knee alignments and rigid feet should likely be avoided. Clinicians can use these solution spaces to optimize transfemoral prostheses that include knees with little to no change in stance flexion, ensuring the safety of users. Model predictions can further inform in-vivo investigations of commercial device interactions, providing evidence for future Clinical Practice Guidelines on transfemoral prosthesis design.

We present an original workflow for structuring a point cloud generated from several scans. Our representation is based on a set of local graphs. Each graph is constructed from the depth map provided by each scan.
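The abstract does not detail how each per-scan graph is built. As an illustration only, a minimal sketch of one plausible construction: back-project every valid depth pixel through assumed pinhole intrinsics (the `fx`, `fy`, `cx`, `cy` values here are hypothetical, not from the paper) and connect 4-neighbouring valid pixels.

```python
import numpy as np

def depth_map_to_graph(depth, fx=500.0, fy=500.0, cx=None, cy=None):
    """Back-project a depth map into 3-D points and connect each valid
    pixel to its valid right/down neighbours, yielding one local graph
    per scan. Zero or negative depth marks 'no return' pixels."""
    h, w = depth.shape
    cx = (w - 1) / 2.0 if cx is None else cx
    cy = (h - 1) / 2.0 if cy is None else cy
    nodes = {}   # pixel (u, v) -> 3-D point (x, y, z)
    edges = set()
    for v in range(h):
        for u in range(w):
            z = depth[v, u]
            if z <= 0:
                continue
            nodes[(u, v)] = ((u - cx) * z / fx, (v - cy) * z / fy, z)
    for (u, v) in nodes:
        for du, dv in ((1, 0), (0, 1)):   # right and down neighbours
            if (u + du, v + dv) in nodes:
                edges.add(((u, v), (u + du, v + dv)))
    return nodes, edges
```

Because each graph depends only on its own depth map, graphs can be built one scan at a time, consistent with the memory-frugal, scan-after-scan aggregation the abstract describes.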
The graphs are then connected together via the overlapping areas, and careful handling of the redundant points in these regions leads to a piecewise yet globally consistent structure for the underlying surface sampled by the point cloud. The proposed workflow allows structuring aggregated point clouds, scan after scan, whatever the number of acquisitions and the number of points per acquisition, even on computers with very limited memory capacities. To show that our structure can be highly relevant for the community, where the gigantic amount of data represents a real scientific challenge per se, we present an algorithm based on this structure that is capable of resampling billions of points on standard computers. This application is particularly attractive for simplifying and visualizing gigantic point clouds representing very large-scale scenes (buildings, urban scenes, historical sites), which often require a prohibitive number of points to be described accurately.

Efficient layout of large-scale graphs remains a challenging problem: force-directed and dimensionality-reduction-based methods suffer from the high overhead of graph-distance and gradient computation. In this paper, we present a new graph layout algorithm, called DRGraph, that enhances the nonlinear dimensionality reduction process with three schemes: approximating graph distances by means of a sparse distance matrix, estimating the gradient using the negative sampling technique, and accelerating the optimization process through a multi-level layout scheme. DRGraph achieves linear complexity in both computation and memory consumption, and scales up to large graphs with millions of nodes. Experimental results and comparisons with state-of-the-art graph layout methods demonstrate that DRGraph generates visually comparable layouts with a faster running time and a lower memory requirement.

Pedestrian detection relying on deep convolutional neural networks has made significant progress.
Though promising results have been achieved on standard pedestrians, performance on heavily occluded pedestrians remains far from satisfactory. The main culprits are intra-class occlusions involving other pedestrians and inter-class occlusions caused by other objects, such as cars and bicycles. These result in a multitude of occlusion patterns. We propose an approach for occluded pedestrian detection with the following contributions. First, we introduce a novel mask-guided attention network that fits naturally into popular pedestrian detection pipelines. Our attention network emphasizes visible pedestrian regions while suppressing occluded ones by modulating full-body features. Second, we propose an occlusion-sensitive hard-example mining method and an occlusion-sensitive loss, which mine hard samples according to the occlusion level and assign higher weights to detection errors on heavily occluded pedestrians. Third, we empirically demonstrate that weak box-based segmentation annotations provide a reasonable approximation to their dense pixel-wise counterparts. Experiments are performed on the CityPersons, Caltech, and ETH datasets. Our approach sets a new state of the art on all three datasets, obtaining an absolute gain of 10.3% in log-average miss rate over the best reported results on the heavily occluded (HO) pedestrian set of the CityPersons test set. Code and models are available at https://github.com/Leotju/MGAN.

This paper presents a novel framework for extracting highly compact and discriminative features for face video retrieval using deep convolutional neural networks (CNNs). The face video retrieval task is to find the videos containing the face of a specific person in a database, given a face image or a face video of that person as a query.
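At its simplest, the retrieval task just described can be sketched as follows: pool per-frame face descriptors into one video-level feature and rank videos by cosine similarity to the query. The mean-pooling choice and all names below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def retrieve_videos(query_feat, video_feats, top_k=3):
    """Rank videos by cosine similarity between the query face descriptor
    and a per-video feature (here: the mean of that video's frame
    descriptors, shape (n_frames, dim))."""
    q = query_feat / np.linalg.norm(query_feat)
    scored = []
    for video_id, frames in video_feats.items():
        v = frames.mean(axis=0)
        v = v / np.linalg.norm(v)
        scored.append((float(q @ v), video_id))
    scored.sort(reverse=True)               # highest similarity first
    return [video_id for _, video_id in scored[:top_k]]
```

The quality and footprint of the descriptors themselves are exactly what the paper's loss and quantization scheme, described next, aim to improve.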
A key challenge is to extract discriminative features with a small storage footprint from face videos with large intra-class variations caused by differences in angle, illumination, and facial expression. In recent years, CNN-based binary hashing and metric learning methods have shown notable progress in image/video retrieval tasks. However, existing CNN-based binary hashing and metric learning suffer from inevitable information loss and storage inefficiency, respectively. To cope with these problems, the proposed framework consists of two parts: first, a novel loss function using a radial basis function kernel (RBF Loss) is introduced to train a neural network to generate compact and discriminative high-level features; second, an optimized quantization using a logistic function (Logistic Quantization) is suggested to convert a real-valued feature to a 1-byte integer with minimal information loss. Through face video retrieval experiments on a challenging TV series data set (ICT-TV), it is demonstrated that the proposed framework outperforms existing state-of-the-art feature extraction methods. Furthermore, the effectiveness of the RBF Loss was also demonstrated through image classification and retrieval experiments on the CIFAR-10 and Fashion-MNIST data sets with LeNet-5.

Spherical-omnidirectional acoustic sources have become powerful tools, providing a near-ideal omnidirectional beam pattern for acoustic tests and communications. Current spherical-omnidirectional acoustic sources do not combine an omnidirectional beam pattern with a high transmitting voltage response in the frequency range above 200 kHz. This work presents the design, fabrication, and measurement of a high-frequency spherical-omnidirectional transducer that provides a near-ideal omnidirectional beam pattern and a high transmitting voltage response.
The active element of the transducer consists of six identical spherically curved square coupons of 1-3 piezoelectric composite operating in thickness mode. The electroacoustic responses of the fabricated transducer were measured in water. The measured resonance frequency was 280 kHz, and the maximum transmitting voltage response was 161.3 dB re 1 μPa/V @ 1 m. The horizontal and vertical beam widths were 360° and 346°, respectively. The measurements show that the spherical piezoelectric composite transducer has a favorable spherical-omnidirectional behavior and a high transmitting voltage response at high frequency. These results demonstrate that it is a strong candidate for high-frequency underwater acoustic sources that require an omnidirectional response.

During the COVID-19 pandemic, the ultraportable ultrasound smart probe has proven to be one of the few practical diagnostic and monitoring tools for doctors fully covered in personal protective equipment. The real-time operation, safety, ease of sanitization, and ultraportability of an ultrasound smart probe make it extremely suitable for diagnosing COVID-19. In this article, we discuss the implementation of a smart probe designed according to the classic architecture of ultrasound scanners. The design balances performance and power consumption. This programmable platform supports a 64-channel fully digital beamformer, measures less than 10 cm × 5 cm, achieves a 60-dBFS signal-to-noise ratio (SNR), and draws an average of ~4 W at 80% power efficiency. The platform is capable of triplex imaging (B-mode, M-mode, color, and pulsed-wave Doppler) in real time.
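The article's 64-channel beamformer internals are not specified in this summary. As a sketch of the underlying principle only, a delay-and-sum beamformer with whole-sample steering delays (a simplification; real probe designs typically use fractional delays via interpolation, plus apodization weights):

```python
import numpy as np

def delay_and_sum(channels, delays_samples):
    """Digital delay-and-sum beamforming: advance each channel by its
    steering delay (whole samples, for simplicity) so echoes from the
    focal point align, then average them coherently.

    channels       : array of shape (n_channels, n_samples)
    delays_samples : per-channel delay, in samples
    """
    aligned = [np.roll(ch, -d) for ch, d in zip(channels, delays_samples)]
    return np.mean(aligned, axis=0)
```

Coherent summation reinforces on-axis echoes while off-axis arrivals, which do not align after the delays, average toward zero; this is the basic mechanism behind the SNR figures quoted above.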
The hardware design files are available to researchers and engineers for further study, improvement, or rapid commercialization of ultrasound smart probes to fight COVID-19.

Climate models play a significant role in the understanding of climate change, and the effective presentation and interpretation of their results is important for both the scientific community and the general public. For the latter audience, which has become increasingly concerned with the implications of climate change for society, visualizations must be compelling and engaging. We describe the use of ParaView, a well-established visualization application, to produce images and animations of results from a large set of modeling experiments, and their use in the promulgation of climate research results. Visualization can also make useful contributions to model development, particularly for complex large-scale applications such as climate models. We present early results from the construction of a next-generation climate model designed for exascale compute platforms, and show how visualization has helped in the development process, particularly with regard to higher model resolutions and novel data representations.