Thermoresponsive interfaces obtained using poly(N-isopropylacrylamide)-based copolymers for bioseparation and tissue engineering applications.
This article is concerned with fractional-order discontinuous complex-valued neural networks (FODCNNs). Based on a new fractional-order inequality, the system is analyzed as a whole in the complex domain, without the decomposition commonly used in the literature. First, the existence of a global Filippov solution is established in the complex domain on the basis of the theories of vector norms and fractional calculus. Next, by virtue of nonsmooth analysis and differential inclusion theory, some sufficient conditions are developed to guarantee the global dissipativity and quasi-Mittag-Leffler synchronization of FODCNNs. Furthermore, the error bounds of quasi-Mittag-Leffler synchronization are estimated without reference to the initial values. Notably, our results include some existing integer-order and fractional-order ones as special cases. Finally, numerical examples are given to show the effectiveness of the obtained results.

Deep neural networks (DNNs) are easily fooled by adversarial examples. Most existing defense strategies defend against adversarial examples using the full information of whole images. In reality, one possible reason why humans are not sensitive to adversarial perturbations is that the human visual mechanism often concentrates on the most important regions of an image. Deep attention mechanisms have been applied in many areas of computer vision and have achieved great success. Attention modules are composed of an attention branch and a trunk branch. The encoder/decoder architecture in the attention branch has the potential to compress adversarial perturbations. In this article, we theoretically prove that attention modules can compress adversarial perturbations by destroying potential linear characteristics of DNNs.
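The linearity view of adversarial vulnerability mentioned above can be illustrated numerically (this is our own toy example, not the paper's construction): for a purely linear score function, a tiny perturbation aligned with the weight signs shifts the score by an amount that grows with the input dimension.

```python
import numpy as np

# Hypothetical linear classifier: score = w @ x.
# A perturbation of per-coordinate size eps aligned against sign(w)
# shifts the score by eps * ||w||_1, which grows with dimension d
# even though each coordinate changes only imperceptibly.
rng = np.random.default_rng(0)
d = 1000
w = rng.choice([-1.0, 1.0], size=d)   # weight vector
x = rng.normal(0.0, 1.0, size=d)      # a clean input

eps = 0.1                             # tiny per-coordinate change
x_adv = x - eps * np.sign(w)          # FGSM-style step against the score

clean_score = w @ x
adv_score = w @ x_adv
print(clean_score - adv_score)        # ≈ 100.0 (= eps * d)
```

A 0.1-sized change per coordinate thus moves the score by roughly 100, which is why high-dimensional (near-)linear models are easy to fool.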
Considering the distribution characteristics of adversarial perturbations in different frequency bands, we design and compare three types of attention modules based on frequency decomposition and reorganization to defend against adversarial examples. Moreover, we find that our designed attention modules can obtain high classification accuracies on clean images by locating attention regions more accurately. Experimental results on the CIFAR and ImageNet datasets demonstrate that frequency reorganization in attention modules can not only achieve good robustness to adversarial perturbations, but also obtain comparable or even higher classification accuracies on clean images. Moreover, our proposed attention modules can be integrated with existing defense strategies as components to further improve adversarial robustness.

Few-shot learning (FSL) refers to the learning task that generalizes from base to novel concepts with only a few examples observed during training. One intuitive FSL approach is to hallucinate additional training samples for novel categories. While this is typically done by learning from a disjoint set of base categories with a sufficient amount of training data, most existing works do not fully exploit the intra-class information from base categories, and thus there is no guarantee that the hallucinated data will represent the class of interest accordingly. In this paper, we propose the Feature Disentanglement and Hallucination Network (FDH-Net), which jointly performs feature disentanglement and hallucination for FSL purposes. More specifically, our FDH-Net is able to disentangle input visual data into class-specific and appearance-specific features. With both data recovery and classification constraints, hallucination of image features for novel categories using appearance information extracted from base categories can be achieved.
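The disentangle-then-recombine idea behind such hallucination can be sketched with a deliberately simplified toy (our own construction, not FDH-Net itself): pretend the first half of a feature vector carries the class code and the second half the appearance code; a hallucinated novel-class sample then pairs the novel class code with a base-class appearance code.

```python
import numpy as np

# Toy feature hallucination by disentanglement. In a real model the
# split would be learned by an encoder; here it is a fixed halving,
# purely to illustrate the recombination step.
rng = np.random.default_rng(0)
base_feat = rng.normal(size=128)    # feature from a data-rich base class
novel_feat = rng.normal(size=128)   # the single novel-class example

cls_code, _ = np.split(novel_feat, 2)    # class-specific half
_, base_app = np.split(base_feat, 2)     # appearance-specific half

hallucinated = np.concatenate([cls_code, base_app])  # new novel-class sample
print(hallucinated.shape)  # (128,)
```

Each base-class example contributes a different appearance code, so one novel example can be expanded into many hallucinated training samples.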
We perform extensive experiments on two fine-grained datasets (CUB and FLO) and two coarse-grained ones (mini-ImageNet and CIFAR-100). The results confirm that our framework performs favorably against state-of-the-art metric-learning and hallucination-based FSL models.

Most existing unsupervised active learning methods aim at minimizing the data reconstruction loss by using linear models to choose representative samples for manual labeling in an unsupervised setting, and thus often fail to model data with complex non-linear structure. To address this issue, we propose a new deep unsupervised active learning method for classification tasks, inspired by the idea of matrix sketching and called ALMS. Specifically, ALMS leverages a deep auto-encoder to embed data into a latent space, and then describes all the embedded data with a small sketch that summarizes the major characteristics of the data. In contrast to previous approaches that reconstruct the whole data matrix to select representative samples, ALMS aims to select a representative subset of samples that well approximates the sketch, which preserves the major information of the data while significantly reducing the number of network parameters. This allows our algorithm to alleviate model overfitting and readily cope with large datasets. In effect, the sketch provides a type of self-supervised signal to guide the learning of the model. Moreover, we propose to construct an auxiliary self-supervised task by classifying real/fake samples, in order to further improve the representation ability of the encoder. We thoroughly evaluate the performance of ALMS on both single-label and multi-label classification tasks, and the results demonstrate its superior performance against state-of-the-art methods. The code can be found at https://github.com/lrq99/ALMS.

Text tracking aims to track multiple text instances in a video and construct a trajectory for each text.
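The matrix-sketching idea underlying ALMS above can be illustrated with Frequent Directions, a standard sketching algorithm (our choice for illustration; the paper's exact sketch may differ): a small ℓ-row matrix B is maintained so that BᵀB approximates XᵀX for the full data matrix X.

```python
import numpy as np

def frequent_directions(X, ell):
    """Maintain an ell-row sketch B of X with B.T @ B ≈ X.T @ X.
    Shown only to illustrate the 'small sketch summarizing the data' idea."""
    n, d = X.shape
    B = np.zeros((ell, d))
    next_zero = 0
    for row in X:
        if next_zero == ell:                     # sketch full: shrink it
            U, s, Vt = np.linalg.svd(B, full_matrices=False)
            s = np.sqrt(np.maximum(s**2 - s[-1]**2, 0.0))
            B = s[:, None] * Vt                  # last row becomes zero
            next_zero = ell - 1
        B[next_zero] = row                       # insert into the free row
        next_zero += 1
    return B

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 30)) @ rng.normal(size=(30, 30))  # correlated data
B = frequent_directions(X, ell=16)

err = np.linalg.norm(X.T @ X - B.T @ B, 2)
bound = 2 * np.linalg.norm(X) ** 2 / 16   # loose form of the FD guarantee
print(err <= bound)
```

The 16-row sketch stands in for the 200-row data matrix, which is exactly the kind of compact summary ALMS selects samples to approximate.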
Existing methods tackle this task with the tracking-by-detection framework, i.e., detecting the text instances in each frame and associating the corresponding text instances in consecutive frames. We argue that the tracking accuracy of this paradigm is severely limited in more complex scenarios: owing to motion blur and other factors, missed detections of text instances break the text trajectories, and different text instances with similar appearance are easily confused, leading to incorrect associations. To this end, a novel spatio-temporal complementary text tracking model is proposed in this paper. We leverage a Siamese complementary module to fully exploit the temporal continuity of text instances, which effectively alleviates missed detections and hence ensures the completeness of each text trajectory. We further integrate the semantic cues and the visual cues of each text instance into a unified representation via a text similarity learning network, which supplies high discriminative power in the presence of text instances with similar appearance, and thus avoids mis-association between them. Our method achieves state-of-the-art performance on several public benchmarks. The source code is available at https://github.com/lsabrinax/VideoTextSCM.

This paper proposes a dual-supervised uncertainty inference (DS-UI) framework for improving Bayesian estimation-based uncertainty inference (UI) in DNN-based image recognition. In the DS-UI, we combine the classifier of a DNN, i.e., the last fully-connected (FC) layer, with a mixture of Gaussian mixture models (MoGMM) to obtain an MoGMM-FC layer.
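The idea of reading class probabilities off feature densities, rather than only taking the argmax of logits, can be sketched as follows (a toy of our own with one Gaussian per class; the paper's MoGMM-FC layer uses full Gaussian mixtures).

```python
import numpy as np

# Model each class's feature distribution with a single Gaussian and
# interpret a new feature vector probabilistically via Bayes' rule
# (equal priors, shared isotropic variance; all assumptions ours).
rng = np.random.default_rng(0)
means = np.array([[0.0, 0.0], [4.0, 4.0]])                 # true class means
feats = means[:, None, :] + rng.normal(size=(2, 100, 2))   # 100 samples/class

mu = feats.mean(axis=1)            # estimated per-class means
var = feats.var(axis=1).mean()     # shared isotropic variance

def class_probs(x):
    # log N(x | mu_c, var*I), up to a constant shared by all classes
    log_dens = -((x - mu) ** 2).sum(axis=1) / (2 * var)
    p = np.exp(log_dens - log_dens.max())   # stable softmax over densities
    return p / p.sum()

p = class_probs(np.array([0.2, -0.1]))
print(p)   # ≈ [1, 0]: overwhelmingly class 0
```

Because the output is a proper probability vector derived from the feature densities, near-uniform values directly signal uncertain inputs.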
Unlike existing UI methods for DNNs, which only calculate the means or modes of the distributions of the DNN outputs, the proposed MoGMM-FC layer acts as a probabilistic interpreter for the features fed into the classifier, directly calculating their probabilities for the DS-UI. In addition, we propose a dual-supervised stochastic gradient-based variational Bayes (DS-SGVB) algorithm for optimizing the MoGMM-FC layer. Unlike conventional SGVB and the optimization algorithms in other UI methods, the DS-SGVB not only models the samples in the specific class for each Gaussian mixture model (GMM) in the MoGMM, but also considers the negative samples from other classes, simultaneously reducing the intra-class distances and enlarging the inter-class margins to enhance the learning ability of the MoGMM-FC layer. Experimental results show that the DS-UI outperforms state-of-the-art UI methods in misclassification detection. We further evaluate the DS-UI in open-set out-of-domain/-distribution detection and find statistically significant improvements. Visualizations of the feature spaces demonstrate the superiority of the DS-UI. Code is available at https://github.com/PRIS-CV/DS-UI.

Image-text retrieval aims to capture the semantic correlation between images and texts. Existing image-text retrieval methods can be roughly categorized into the embedding learning paradigm and the pair-wise learning paradigm. The former fails to capture the fine-grained correspondence between images and texts. The latter achieves fine-grained alignment between regions and words, but the high cost of pair-wise computation leads to slow retrieval speed. In this paper, we propose a novel Memory-based EMBedding enhancement method for image-text Retrieval (MEMBER), which introduces global memory banks to enable fine-grained alignment and fusion in the embedding learning paradigm.
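A memory-bank enhancement step of the kind described above can be sketched as attention over the bank (our own toy construction; the bank size, temperature, and additive fusion are assumptions, not MEMBER's exact design): an image embedding is enriched with a similarity-weighted sum of stored text features.

```python
import numpy as np

# Enrich one image embedding with relevant text features from a
# global text memory bank, via softmax over cosine similarities.
rng = np.random.default_rng(0)
text_memory = rng.normal(size=(512, 64))   # memory bank of text features
img = rng.normal(size=64)                  # one image embedding

def l2n(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

sims = l2n(text_memory) @ l2n(img)         # cosine similarity to each entry
w = np.exp(sims / 0.1)                     # temperature tau = 0.1 (assumed)
w = w / w.sum()                            # attention weights over the bank
enhanced = img + w @ text_memory           # enriched image embedding
print(enhanced.shape)  # (64,)
```

Since the bank is shared globally, this cross-modal enrichment happens inside the embedding pass, so retrieval remains a single nearest-neighbor lookup rather than a pair-wise scoring loop.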
Specifically, we enrich image (resp., text) features with relevant text (resp., image) features stored in the text (resp., image) memory bank. In this way, our model not only accomplishes mutual embedding enhancement across the two modalities, but also maintains retrieval efficiency. Extensive experiments demonstrate that MEMBER remarkably outperforms state-of-the-art approaches on two large-scale benchmark datasets.

RGB-D saliency detection has received increasing attention in recent years. Many efforts have been devoted to this area, most of which try to integrate the multi-modal information, i.e., RGB images and depth maps, via various fusion strategies. However, some of them ignore the inherent difference between the two modalities, which leads to performance degradation on some challenging scenes. Therefore, in this paper, we propose a novel RGB-D saliency model, the Dynamic Selective Network (DSNet), to perform salient object detection (SOD) in RGB-D images by taking full advantage of the complementarity between the two modalities. Specifically, we first deploy a cross-modal global context module (CGCM) to acquire high-level semantic information, which can be used to roughly locate salient objects. Then, we design a dynamic selective module (DSM) to dynamically mine the cross-modal complementary information between RGB images and depth maps, and to further optimize the multi-level and multi-scale information by executing gated and pooling-based selection, respectively. Moreover, we conduct boundary refinement to obtain high-quality saliency maps with clear boundary details.
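The gated selection between modalities can be sketched as follows (our illustrative reading of "gated selection", not DSNet's exact module; the gate projection is an assumed toy parameter): a sigmoid gate computed from both modalities decides, per position and channel, how much of each modality's feature to keep.

```python
import numpy as np

# Gated cross-modal fusion of RGB and depth feature maps.
rng = np.random.default_rng(0)
rgb = rng.normal(size=(8, 8, 32))            # RGB feature map (H, W, C)
depth = rng.normal(size=(8, 8, 32))          # depth feature map (H, W, C)
W = rng.normal(scale=0.1, size=(64, 32))     # gate projection (assumed)

both = np.concatenate([rgb, depth], axis=-1)     # (8, 8, 64)
gate = 1.0 / (1.0 + np.exp(-(both @ W)))         # sigmoid gate in (0, 1)
fused = gate * rgb + (1.0 - gate) * depth        # convex per-channel mixture
print(fused.shape)  # (8, 8, 32)
```

Because the gate is a convex weight, every fused value lies between the RGB and depth responses, so an unreliable modality (e.g., a noisy depth map) can be smoothly suppressed rather than hard-switched off.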
Extensive experiments on eight public RGB-D datasets show that the proposed DSNet achieves excellent, competitive performance against 17 current state-of-the-art RGB-D SOD models.

A group leader decided that his lab would share the fluorescent dyes they create, for free and without authorship requirements. Nearly 12,000 aliquots later, he reveals what has happened since.