While deep learning methods have hitherto achieved considerable success in medical image segmentation, they are still hampered by two limitations: (i) reliance on large-scale, well-labeled datasets, which are difficult to curate given the expert-driven and time-consuming nature of pixel-level annotation in clinical practice, and (ii) failure to generalize from one domain to another, especially when the target domain is a different modality with severe domain shifts. Recent unsupervised domain adaptation (UDA) techniques leverage abundant labeled source data together with unlabeled target data to reduce the domain gap, but these methods degrade significantly with limited source annotations. In this study, we address this underexplored UDA problem, investigating a challenging but valuable realistic scenario in which the source domain not only exhibits domain shift w.r.t. the target domain but also suffers from label scarcity. To this end, we propose a novel and generic framework called "Label-Efficient Unsupervised Domain Adaptation" (LE-UDA). In LE-UDA, we construct self-ensembling consistency for knowledge transfer between the two domains, as well as a self-ensembling adversarial learning module to achieve better feature alignment for UDA. To assess the effectiveness of our method, we conduct extensive experiments on two different tasks for cross-modality segmentation between MRI and CT images. Experimental results demonstrate that the proposed LE-UDA can efficiently leverage limited source labels to improve cross-domain segmentation performance, outperforming state-of-the-art UDA approaches in the literature.

Registration of dynamic CT image sequences is a crucial preprocessing step for the clinical evaluation of multiple physiological determinants in the heart, such as global and regional myocardial perfusion.
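At its core, image registration resamples one frame under an estimated displacement field. Below is a minimal 1-D sketch of that resampling step in plain Python; it illustrates only the generic warping operation, not the learned registration method described in this work:

```python
def warp(signal, displacement):
    """Apply a per-sample displacement field to a 1-D signal with
    linear interpolation (out-of-range samples clamp to the edges)."""
    n = len(signal)
    out = []
    for i, d in enumerate(displacement):
        x = min(max(i + d, 0), n - 1)   # displaced sample position
        lo = int(x)
        hi = min(lo + 1, n - 1)
        w = x - lo                      # interpolation weight
        out.append((1 - w) * signal[lo] + w * signal[hi])
    return out
```

A zero displacement field leaves the signal unchanged; a registration network would predict the field that best aligns two frames.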
In this work, we present a deformable deep learning-based image registration method for quantitative myocardial perfusion CT examinations which, in contrast to previous approaches, takes into account several unique challenges: low image quality with less accurate anatomical landmarks, dynamic changes of contrast agent concentration in the heart chambers and tissue, and misalignment caused by cardiac stress, respiration, and patient motion. The method uses a recursive cascade network with a ventricle segmentation module and a novel loss function that accounts for local contrast changes over time. It was trained and validated on a dataset of n = 118 patients with known or suspected coronary artery disease and/or aortic valve insufficiency. Our results demonstrate that the proposed method registers dynamic cardiac perfusion sequences by reducing local tissue displacements of the left ventricle (LV), while contrast changes do not affect the registration or image quality, in particular the absolute CT (HU) values of the entire sequence. In addition, the deep learning-based approach achieves a short processing time of a few seconds, compared to conventional image registration methods, demonstrating its potential for quantitative CT myocardial perfusion measurements in daily clinical routine.

Deep-learning (DL) based CT image generation methods are often evaluated using RMSE and SSIM. By contrast, conventional model-based image reconstruction (MBIR) methods are often evaluated using image properties such as resolution, noise, and bias. Calculating such image properties requires time-consuming Monte Carlo (MC) simulations. For MBIR, linearized analysis using a first-order Taylor expansion has been developed to characterize noise and resolution without MC simulations. This inspired us to investigate whether linearization can be applied to DL networks to enable efficient characterization of resolution and noise.
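A first-order Taylor linearization of a network can be sketched with finite-difference Jacobian-vector products. The following is a generic illustration in plain Python, not the authors' implementation:

```python
def jvp(f, x, v, eps=1e-6):
    """Directional derivative (Jacobian-vector product) of f at x
    along v, approximated with central differences."""
    xp = [xi + eps * vi for xi, vi in zip(x, v)]
    xm = [xi - eps * vi for xi, vi in zip(x, v)]
    fp, fm = f(xp), f(xm)
    return [(a - b) / (2 * eps) for a, b in zip(fp, fm)]

def linearize(f, x0):
    """First-order Taylor model of f around x0:
    g(x) = f(x0) + J(x0) (x - x0)."""
    f0 = f(x0)
    def g(x):
        v = [xi - x0i for xi, x0i in zip(x, x0)]
        j = jvp(f, x0, v)
        return [a + b for a, b in zip(f0, j)]
    return g
```

Once the network is replaced by such a linear model, noise and resolution measures propagate through it in closed form, which is what makes MC simulations unnecessary.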
We used FBPConvNet as an example DL network and performed extensive numerical evaluations, including both computer simulations and real CT data. Our results show that network linearization works well under normal exposure settings; for such applications, linearization can characterize image noise and resolution without running MC simulations. With this work we provide the computational tools to implement network linearization. The efficiency and ease of implementation of network linearization can hopefully popularize physics-related image quality measures for DL applications. Our methodology is general: it allows flexible compositions of DL nonlinear modules and linear operators such as filtered backprojection (FBP). For the latter, we develop a generic method for computing the covariance images needed for network linearization.

Automatic segmentation and differentiation of retinal arterioles and venules (AV), defined as the small blood vessels directly before and after the capillary plexus, are of great importance for the diagnosis of various eye and systemic diseases, such as diabetic retinopathy, hypertension, and cardiovascular disease. Optical coherence tomography angiography (OCTA) is a recent imaging modality that provides capillary-level blood flow information. However, OCTA does not exhibit the colorimetric and geometric differences between arterioles and venules that fundus photography does. Various methods have been proposed to differentiate AV in OCTA, but they typically require guidance from other imaging modalities. In this study, we propose a cascaded neural network to automatically segment and differentiate AV solely from OCTA. A convolutional neural network (CNN) module is first applied to generate an initial segmentation, followed by a graph neural network (GNN) that improves the connectivity of the initial result. Various CNN and GNN architectures are employed and compared.
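The two-stage idea, a per-pixel predictor followed by connectivity-aware refinement, can be caricatured in a few lines of plain Python. Thresholding stands in for the CNN and a neighbour-voting rule for the GNN; both are illustrative stand-ins, not the architectures used:

```python
def initial_segmentation(image, thr=0.5):
    """Stand-in for the CNN stage: per-pixel thresholding."""
    return [[1 if p > thr else 0 for p in row] for row in image]

def refine_connectivity(seg):
    """Stand-in for the GNN stage: a background pixel with at least two
    foreground 4-neighbours is promoted, closing small vessel gaps."""
    h, w = len(seg), len(seg[0])
    out = [row[:] for row in seg]
    for i in range(h):
        for j in range(w):
            if seg[i][j] == 0:
                n = sum(seg[a][b]
                        for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= a < h and 0 <= b < w)
                if n >= 2:
                    out[i][j] = 1
    return out
```

The refinement stage only repairs breaks in otherwise continuous vessels, which is the role the GNN plays in the cascade.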
The proposed method is evaluated on multi-center clinical datasets, including 3×3 mm² and 6×6 mm² OCTA, and holds the potential to enrich OCTA image information for the diagnosis of various diseases.

Modelling real-world time series can be challenging in the absence of sufficient data. Limited data in healthcare can arise for several reasons, namely when the number of subjects is insufficient or the observed time series is irregularly sampled at a very low sampling frequency. This is especially true when attempting to develop personalised models, as there are typically few data points available for training from an individual subject. Furthermore, the need for early prediction (as is often the case in healthcare applications) amplifies the problem of limited data availability. This article proposes a novel personalised technique that can be learned in the absence of sufficient data for early prediction in time series. Our novelty lies in a subset selection approach that selects time series sharing temporal similarities with the time series of interest, commonly known as the test time series. A Gaussian process-based model is then learned using the existing test data and the chosen subset to produce personalised predictions for the test subject. Experiments with univariate and multivariate data from real-world healthcare applications show that our strategy outperforms the state of the art by around 20%.

Inspired by a newly discovered gene regulation mechanism known as competing endogenous RNA (ceRNA) interactions, several computational methods have been proposed to generate ceRNA networks. However, most of these methods have focused on deriving restricted types of ceRNA interactions, such as lncRNA-miRNA-mRNA interactions. Competition for miRNA binding occurs not only between lncRNAs and mRNAs but also between lncRNAs or between mRNAs.
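This competition can be made concrete: any two transcripts, coding or not, are candidate ceRNA partners when they share binding sites for the same miRNAs. A minimal sketch in plain Python, with hypothetical transcript and miRNA names:

```python
def shared_mirnas(targets_a, targets_b):
    """Number of miRNAs that bind both transcripts; a simple proxy
    for the strength of a candidate ceRNA interaction."""
    return len(set(targets_a) & set(targets_b))

def candidate_cerna_pairs(targets, min_shared=2):
    """All transcript pairs (of any RNA type) that share at least
    min_shared miRNAs, regardless of coding status."""
    names = sorted(targets)
    return [(a, b, shared_mirnas(targets[a], targets[b]))
            for i, a in enumerate(names) for b in names[i + 1:]
            if shared_mirnas(targets[a], targets[b]) >= min_shared]
```

Because the pairing ignores transcript type, lncRNA-lncRNA and mRNA-mRNA competition is captured alongside the usual lncRNA-mRNA case.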
Furthermore, a large number of pseudogenes also act as ceRNAs, thereby regulating other genes. In this study, we developed a general method for constructing integrative networks of all possible ceRNA interactions in renal cell carcinoma (RCC). From the ceRNA networks we derived potential prognostic biomarkers, each of which is a triplet of two ceRNAs and a miRNA (i.e., ceRNA-miRNA-ceRNA). Interestingly, some prognostic ceRNA triplets include no mRNA at all, consisting of two non-coding RNAs and a miRNA, a configuration that has rarely been reported. Comparison of the prognostic ceRNA triplets with known prognostic genes in RCC showed that the triplets have better predictive power for survival than the known prognostic genes. Our approach will help construct integrative networks of ceRNAs of all types and find new potential prognostic biomarkers in cancer.

We present ASH, a modern and high-performance framework for parallel spatial hashing on the GPU. Compared to existing GPU hash map implementations, ASH achieves higher performance, supports richer functionality, and requires fewer lines of code (LoC) when used to implement spatially varying operations, from volumetric geometry reconstruction to differentiable appearance reconstruction. Unlike existing GPU hash maps, the ASH framework provides a versatile tensor interface that hides low-level details from the user. In addition, by decoupling the internal hashing data structure from the key-value data in buffers, we offer direct access to spatially varying data via indices, enabling seamless integration with modern libraries such as PyTorch. To achieve this, we (1) detach stored key-value data from the low-level hash map implementation; (2) bridge the pointer-first low-level data structures to index-first high-level tensor interfaces via an index heap; and (3) adapt both generic and non-generic integer-only hash map implementations as backends to operate on multi-dimensional keys.
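The decoupling in (1) and (2) can be mimicked on the CPU: a hash table stores only key-to-index mappings, while keys and values live in flat buffers addressed by those indices, with freed slots recycled from an index heap. A simplified Python sketch, where plain lists stand in for GPU tensor buffers:

```python
class IndexedHashMap:
    """Toy index-first hash map: lookups return buffer indices, so
    values can be consumed in bulk by external tensor libraries."""
    def __init__(self):
        self.index = {}    # key -> buffer slot
        self.keys = []     # flat key buffer
        self.values = []   # flat value buffer
        self.free = []     # index heap of recycled slots

    def insert(self, key, value):
        if key in self.index:
            self.values[self.index[key]] = value
            return self.index[key]
        slot = self.free.pop() if self.free else len(self.keys)
        if slot == len(self.keys):
            self.keys.append(key)
            self.values.append(value)
        else:
            self.keys[slot] = key
            self.values[slot] = value
        self.index[key] = slot
        return slot

    def erase(self, key):
        self.free.append(self.index.pop(key))

    def find(self, key):
        """Return the buffer slot for key, or -1 if absent."""
        return self.index.get(key, -1)
```

Because consumers hold indices rather than pointers, the value buffer can be handed to a tensor library wholesale, which is the property ASH exploits for PyTorch integration.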
We first profile our hash map against state-of-the-art hash maps on synthetic data to show the performance gain from this architecture. We then show that ASH can consistently achieve higher performance on various large-scale 3D perception tasks with fewer LoC by showcasing several applications, including (1) point cloud voxelization, (2) retargetable volumetric scene reconstruction, (3) non-rigid point cloud registration and volumetric deformation, and (4) spatially varying geometry and appearance refinement. ASH and its example applications are open-sourced in Open3D (http://www.open3d.org).

Most value function learning algorithms in reinforcement learning are based on the mean squared (projected) Bellman error. However, squared errors are known to be sensitive to outliers, both skewing the solution of the objective and producing high-magnitude, high-variance gradients. To control these high-magnitude updates, typical strategies in RL involve clipping gradients, clipping or rescaling rewards, or clipping errors. While these strategies appear related to robust losses such as the Huber loss, they are built on semi-gradient update rules that do not minimize a known loss. In this work, we build on recent insights reformulating squared Bellman errors as a saddlepoint optimization problem and propose saddlepoint reformulations of the Huber and absolute Bellman errors. We start from a formalization of robust losses, then derive sound gradient-based approaches to minimize them in both the online off-policy prediction and control settings. We characterize the solutions of the robust losses, providing insight into problem settings where they define notably better solutions than the mean squared Bellman error. Finally, we show that the resulting gradient-based algorithms are more stable, for both prediction and control, with less sensitivity to meta-parameters.
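As a point of reference, the Huber variant of the Bellman (TD) error is easy to state. The sketch below shows only the robust loss itself, not the saddlepoint reformulation or the resulting algorithms:

```python
def huber(x, delta=1.0):
    """Huber loss: quadratic near zero, linear in the tails, so
    outlier errors contribute bounded-magnitude gradients."""
    return 0.5 * x * x if abs(x) <= delta else delta * (abs(x) - 0.5 * delta)

def huber_bellman_error(transitions, v, gamma=0.99, delta=1.0):
    """Mean Huber loss of TD errors r + gamma * v(s') - v(s) over a
    batch of (state, reward, next_state) transitions."""
    errs = [r + gamma * v[sp] - v[s] for (s, r, sp) in transitions]
    return sum(huber(e, delta) for e in errs) / len(errs)
```

Replacing `huber` with `lambda x, delta=None: 0.5 * x * x` recovers the mean squared Bellman error, which makes the outlier sensitivity the abstract describes easy to see empirically.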