Video captioning with encoder-decoder structures is an established approach to sentence generation. A standard way to improve model performance is to extract multiple kinds of visual features with several feature extraction networks during encoding. Such feature extraction networks are typically kept in a weight-frozen state and are based on convolutional neural networks (CNNs). However, these traditional feature extraction methods have two problems. First, because the feature extraction model is frozen, it cannot be further trained by backpropagating the loss obtained from video captioning training; in particular, this prevents the feature extraction model from learning more about spatial information. Second, using multiple CNNs further increases model complexity. Additionally, the authors of the Vision Transformer (ViT) pointed out an inductive bias of CNNs, namely the local receptive field. We therefore propose a full transformer structure trained end to end for video captioning to overcome these problems. As the feature extraction model, we use a ViT and propose feature extraction gates (FEGs) to enrich the input of the captioning model. We also design a universal encoder attraction (UEA) that applies self-attention to the outputs of all encoder layers. The UEA compensates for the lack of information about the video's temporal relationships, since our method uses only the appearance feature. We evaluate our model against several recent models on the MSRVTT and MSVD benchmark datasets and show competitive performance.
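As a rough illustration of the UEA idea described above, the following numpy sketch applies single-head scaled dot-product self-attention to a stack of per-layer encoder outputs. All shapes, weight matrices, and names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over the rows of X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    A = softmax(Q @ K.T / np.sqrt(d), axis=-1)  # (n, n) attention weights
    return A @ V

# Suppose a 3-layer encoder produced one pooled feature per layer (dim 4).
rng = np.random.default_rng(0)
layer_outputs = rng.standard_normal((3, 4))      # stacked per-layer outputs
Wq, Wk, Wv = (rng.standard_normal((4, 4)) for _ in range(3))
fused = self_attention(layer_outputs, Wq, Wk, Wv)
print(fused.shape)  # (3, 4): one attended vector per encoder layer
```

In a real captioning model the attended vectors would then be combined (e.g., pooled or concatenated) before decoding; that step is omitted here.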
We show that although the proposed model performs captioning with only a single feature, in some cases it outperforms models that use several features.

In recent decades, data-driven methods have gained great popularity in industry, supported by state-of-the-art advances in machine learning. These methods require a large quantity of labeled data, which is difficult and often costly to obtain. To address these challenges, researchers have turned their attention to unsupervised and few-shot learning methods, which have produced encouraging results, particularly in computer vision and natural language processing. Given the lack of pretrained models, time series feature learning is still considered an open area of research. This paper presents an efficient two-stage feature learning approach for anomaly detection in machine processes, based on a prototypical few-shot learning technique that requires a limited number of labeled samples. The work is evaluated in a real-world scenario using the publicly available CNC Machining dataset. The proposed method outperforms the conventional prototypical network, and the feature analysis shows high generalization ability, achieving an F1-score of 90.3%. The comparison with handcrafted features demonstrates the robustness of the deep features and their invariance to data shifts across machines and time periods, making this a reliable method for sensory industrial applications.

Conventional mobile robots employ LIDAR for indoor global positioning and navigation and thus have strict requirements for the ground environment. Under the complicated ground conditions in a greenhouse, the cumulative odometry (ODOM) error arising from wheel slip builds up during long-term operation of the robot, which decreases the accuracy of robot positioning and mapping.
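The core of the prototypical few-shot technique mentioned in the anomaly-detection abstract, classifying queries by distance to per-class mean embeddings, can be sketched in a few lines of numpy. The 2-D embeddings and class layout below are toy assumptions, not data from the paper.

```python
import numpy as np

def prototypes(support, labels, n_classes):
    """Mean embedding per class computed from a labeled support set."""
    return np.stack([support[labels == c].mean(axis=0) for c in range(n_classes)])

def classify(queries, protos):
    """Assign each query to the nearest prototype (squared Euclidean distance)."""
    d = ((queries[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

# Toy 2-D embeddings: class 0 clusters near the origin, class 1 near (5, 5).
support = np.array([[0.1, 0.0], [-0.1, 0.2], [5.0, 5.1], [4.9, 4.8]])
labels = np.array([0, 0, 1, 1])
protos = prototypes(support, labels, n_classes=2)
pred = classify(np.array([[0.0, 0.1], [5.2, 5.0]]), protos)
print(pred)  # [0 1]
```

In the actual method the embeddings would come from a learned feature extractor; only the prototype/nearest-neighbor step is shown here.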
To solve this problem, an integrated positioning system based on UWB (ultra-wideband), an IMU (inertial measurement unit), ODOM, and LIDAR is proposed. First, UWB/IMU/ODOM measurements are fused by the extended Kalman filter (EKF) algorithm to obtain the estimated positioning information. Second, LIDAR is combined with an established two-dimensional (2D) map by the adaptive Monte Carlo localization (AMCL) algorithm to achieve global positioning of the robot. Experiments indicate that the integrated UWB/IMU/ODOM/LIDAR positioning system effectively reduces the cumulative positioning error of the robot in the greenhouse environment. At three moving speeds (0.3 m/s, 0.5 m/s, and 0.7 m/s), the maximum lateral error is below 0.1 m, and the maximum lateral root mean square error (RMSE) reaches 0.04 m. For global positioning, the RMSEs in the x-axis direction, the y-axis direction, and overall are 0.092, 0.069, and 0.079 m, respectively, and the average positioning time of the system is 72.1 ms. This performance is sufficient for robot operation in greenhouse settings that require precise positioning and navigation.

The article discusses the physical foundations of applying the linear magnetoelectric (ME) effect in composites to devices in the low-frequency range, including the electromechanical resonance (EMR) region. The main theoretical expressions for the ME voltage coefficients are given for symmetric and asymmetric composite structures in the quasi-static and resonant modes. The EMR region considered here includes longitudinal, bending, longitudinal-shear, and torsional modes. Explanations are given for finding the main resonant frequencies of the modes under study.
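Returning to the greenhouse robot: in the linear 1-D case, the EKF fusion of drift-prone odometry with absolute UWB fixes reduces to the standard Kalman predict/update cycle sketched below. The noise variances and measurement values are illustrative assumptions, not the paper's parameters.

```python
def kf_step(x, P, u, z, Q=0.05, R=0.10):
    """One predict/update cycle of a 1-D Kalman filter.
    x, P : current state estimate and its variance
    u    : odometry displacement (prediction input; drifts with wheel slip)
    z    : UWB-derived position (absolute but noisy)
    Q, R : process and measurement noise variances (assumed values)
    """
    # Predict with odometry.
    x_pred = x + u
    P_pred = P + Q
    # Update with the UWB measurement.
    K = P_pred / (P_pred + R)           # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0
odom = [0.30, 0.30, 0.30]   # commanded 0.3 m steps (slip makes these optimistic)
uwb  = [0.28, 0.55, 0.86]   # absolute position fixes in meters
for u, z in zip(odom, uwb):
    x, P = kf_step(x, P, u, z)
print(round(x, 3), round(P, 3))
```

Each update pulls the odometry prediction toward the absolute UWB fix, so the variance P shrinks and the slip-induced drift stays bounded, which is the qualitative behavior the paper relies on.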
A comparison of theory and experimental results for several composites is given.

In this study, an ultra-high-sensitivity fiber humidity sensor with a chitosan-film-cascaded Fabry-Perot interferometer (FPI) based on the harmonic Vernier effect (HVE) is proposed and demonstrated. The proposed sensor breaks the strict optical-path-length matching condition of a traditional Vernier effect (TVE) FPI and achieves ultra-high sensitivity by adjusting the harmonic order of the HVE FPI. The internal-envelope intersection tracking method allows spectral demodulation to no longer be limited by the size of the free spectral range (FSR) of the FPI. The sensitivity of the proposed sensor is -83.77 nm/%RH, with a magnification of -53.98 times. This work serves as a guide for achieving ultra-high sensitivity in the fiber sensing field.

In this paper, we present a calculation method for the radiation response eigenvalue based on a monolithic active pixel sensor (MAPS). By comparing the statistical eigenvalues of different regions of a pixel array in bright and dark environments, the linear relationship between the statistical eigenvalues obtained by different algorithms and the radiation dose rate was studied. Additionally, a dose rate characterization method based on analyzing the eigenvalues of the MAPS response signal was proposed. The experimental results show that in a dark background environment, the eigenvalues had a good linear response in any region with gray values in the range of 10-30. In color images, because of differences in background gray values between adjacent color regions, the radiation response signal in dark regions was confounded with the image information in bright regions, resulting in loss of the response signal and degrading the analysis of the radiation response signal.
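For the Vernier-effect sensor above, the classic (first-order) envelope magnification follows directly from the two free spectral ranges: the closer the sensing and reference FSRs, the larger the amplification of the fringe shift. The FSR values below are hypothetical, and the harmonic-order generalization used in the paper should be taken from the paper itself.

```python
def vernier_magnification(fsr_sensor, fsr_ref):
    """Classic (first-order) Vernier envelope magnification factor.
    The spectral envelope shifts roughly M times faster than the
    fringes of the bare sensing FPI."""
    return fsr_ref / abs(fsr_sensor - fsr_ref)

# Illustrative FSRs in nm (hypothetical values, not from the paper):
m = vernier_magnification(fsr_sensor=2.10, fsr_ref=2.00)
print(round(m, 1))  # 20.0
```

This also shows why the TVE imposes a strict matching condition: as the two FSRs approach each other the magnification diverges, but the envelope period grows beyond the measurable spectral window, which is the limitation the harmonic Vernier effect is designed to relax.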
For a low-dose-rate radiation field, because the radiation response signal was weak relative to the background dark noise, frame images had to be accumulated to obtain a sufficient response signal. For an intense radiation field, the number of response events in a single image was very high, and accumulating only two consecutive frames of image data was enough to meet the statistical requirements. The binarization method characterized low-dose-rate radiation well, while at high dose rates combining binarization with total-gray-value statistics of the response data characterized the radiation dose rate better. The calibration experiment results show that the binarization processing method can meet the requirements of using a MAPS for wide-range detection.

Block compressed sensing (BCS) is suitable for image sampling and compression in resource-constrained applications. Adaptive sampling methods can effectively improve the rate-distortion performance of BCS. However, they impose high computational complexity on the encoder, which negates the advantage of BCS. In this paper, we focus on improving adaptive sampling performance at low computational cost. First, we analyze the additional computational complexity of existing adaptive sampling methods for BCS. Second, the adaptive sampling problem of BCS is modeled as a distortion minimization problem. We present three distortion models that relate the block sampling rate to block distortion and use a simple neural network to predict the model parameters from a few measurements. Finally, a fast estimation method is proposed to allocate block sampling rates based on distortion minimization. The results demonstrate that the proposed estimation method for block sampling rates is effective.
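The accumulate-then-binarize step described for the MAPS dose-rate analysis can be sketched as follows; the frame sizes, pixel values, and threshold are synthetic assumptions chosen only to illustrate the mechanics.

```python
import numpy as np

def count_events(frames, threshold):
    """Accumulate frames, binarize, and count above-threshold pixels.
    Binarization suppresses the dark-background noise; the resulting
    event count is the statistic assumed to scale with dose rate."""
    acc = np.sum(frames, axis=0)
    return int((acc > threshold).sum())

# Two synthetic 4x4 'dark' frames containing a few bright radiation hits.
f1 = np.zeros((4, 4)); f1[1, 2] = 40
f2 = np.zeros((4, 4)); f2[1, 2] = 35; f2[3, 0] = 50
events = count_events([f1, f2], threshold=30)
print(events)  # 2
```

At low dose rates many frames would be accumulated so that weak hits clear the threshold; at high dose rates two frames already yield enough events, mirroring the behavior reported above.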
Two of the three proposed distortion models enable the proposed estimation method to outperform existing adaptive sampling methods for BCS. Compared with the computation of BCS at a sampling rate of 0.1, the additional computation of the proposed adaptive sampling method is less than 1.9%.

The concept of synergy has drawn attention and been applied to lower-limb assistive devices such as exoskeletons and prostheses to improve human-machine interaction. A better understanding of the influence of gait kinematics on synergies, together with better synergy-modeling methods, is important for device design and improvement. To this end, gait data from healthy, amputee, and stroke subjects were collected. First, continuous relative phase (CRP) was used to quantify their synergies and explore the influence of kinematics. Second, long short-term memory (LSTM) networks and principal component analysis (PCA) were adopted to model interlimb and intralimb synergy, respectively. The results indicate that the limited hip and knee ranges of motion (RoMs) in stroke patients and amputees significantly influence their synergies in different ways. In interlimb synergy modeling, LSTM (RMSE 0.798° (hip) and 1.963° (knee)) has lower errors than PCA (RMSE 5.050° (hip) and 10.353° (knee)), which is frequently used in the literature. Furthermore, in intralimb synergy modeling, LSTM (RMSE 3.894°) models synergy better than PCA (RMSE 10.312°). In conclusion, stroke patients and amputees employ different compensatory mechanisms, adapting to interlimb and intralimb synergies that differ from those of healthy people. LSTM models synergy better and shows promise for generating trajectories in line with the wearer's motion for lower-limb assistive devices.

Quantitatively and accurately monitoring damage to composites is critical for estimating the remaining life of structures and determining whether maintenance is needed.
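The continuous relative phase used above to quantify synergy is the difference between two joints' phase angles, each built from the (normalized) joint angle and angular velocity. The sinusoidal joint trajectories below are toy signals, not gait data.

```python
import numpy as np

def phase_angle(theta, omega):
    """Phase angle (degrees) from normalized joint angle and angular velocity."""
    return np.degrees(np.arctan2(omega, theta))

def crp(theta1, omega1, theta2, omega2):
    """Continuous relative phase between two joints, wrapped to [-180, 180)."""
    d = phase_angle(theta1, omega1) - phase_angle(theta2, omega2)
    return (d + 180.0) % 360.0 - 180.0

# Toy gait-like signals: the second joint lags the first by 90 degrees.
x = 2 * np.pi * np.linspace(0.0, 1.0, 101)
crp_curve = crp(np.cos(x), -np.sin(x),
                np.cos(x - np.pi / 2), -np.sin(x - np.pi / 2))
print(round(float(crp_curve.mean()), 1))  # constant -90.0 degree phase lag
```

A flat CRP curve, as here, indicates a stable coordination pattern; deviations over the gait cycle are what reveal the altered synergies of stroke patients and amputees.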
This paper proposes an active sensing method for damage localization and quantification in composite plates. A probabilistic imaging algorithm and a statistical method are introduced to reduce the impact of composite anisotropy on the accuracy of damage detection. The matching pursuit decomposition (MPD) algorithm is used to extract a precise time of flight (TOF) for damage detection. Damage localization is achieved by comprehensively evaluating the damage probability results of all sensing paths in the monitoring area. Meanwhile, the scattering source is identified on the elliptical trajectory obtained from the TOF of each sensing path in order to estimate the damage size, which is characterized by the Gaussian kernel probability density distribution of the scattering sources. The algorithm was validated with through-thickness hole damage of various locations and sizes in composite plates.
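The elliptical trajectory mentioned above comes from a simple geometric fact: for one actuator-sensor path, any scatterer producing a given TOF lies on an ellipse whose foci are the actuator and sensor, with total path length equal to wave speed times TOF. The following sketch locates a scatterer by minimizing the distance to that locus over a coarse grid; positions, wave speed, and grid are hypothetical assumptions, and a single path would in practice leave an ellipse of candidates that intersecting paths resolve.

```python
import numpy as np

def ellipse_residual(p, actuator, sensor, tof, velocity):
    """How far point p is from the elliptical locus of one actuator-sensor
    path: |pA| + |pS| should equal velocity * TOF for a true scatterer."""
    p, a, s = map(np.asarray, (p, actuator, sensor))
    path = np.linalg.norm(p - a) + np.linalg.norm(p - s)
    return abs(path - velocity * tof)

# Hypothetical setup: actuator at (0, 0) mm, sensor at (100, 0) mm,
# wave speed 5 mm/us, scatterer at (50, 30) mm.
act, sen, v = (0.0, 0.0), (100.0, 0.0), 5.0
true_damage = np.array([50.0, 30.0])
tof = (np.linalg.norm(true_damage - act) + np.linalg.norm(true_damage - sen)) / v

# Coarse grid search: the damage point minimizes the residual.
grid = [(x, y) for x in range(0, 101, 10) for y in range(0, 51, 10)]
best = min(grid, key=lambda p: ellipse_residual(p, act, sen, tof, v))
print(best)  # (50, 30)
```

Fusing such residuals across many sensing paths, weighted probabilistically, is what yields the damage probability image described in the abstract.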