We propose an automated method for the segmentation of the lumen-intima layer of the common carotid artery in longitudinal-mode ultrasound images. The method is hybrid in the sense that a coarse segmentation is first achieved by optimizing a locally defined contrast function of an active oblong over its five degrees of freedom, and the fine segmentation and delineation of the carotid artery are subsequently achieved by post-processing the portion of the ultrasound image spanned by the annulus region of the optimally fitted active oblong. The post-processing includes median filtering and Canny edge detection to retain the representative lumen-intima points, followed by a smooth curve-fitting technique to delineate the lumen-intima boundary. The algorithm has been validated on 84 longitudinal-mode carotid artery ultrasound images provided by the Signal Processing Laboratory, Brno University. The proposed technique achieves an average accuracy and Dice similarity index of 98.9% and 95.2%, respectively.

Super-resolution ultrasound imaging (SR-US) has enabled a tenfold improvement in the resolution of the microvasculature, with clinical application in many disease processes such as cancer, diabetes and cardiovascular disease. Plane-wave ultrasound (US) platforms, in turn, are capable of the very high frame rates needed to track the microbubble (MB) contrast agents used in SR-US. Both B-mode US imaging and contrast-enhanced US (CEUS) imaging have been used effectively in SR-US, with B-mode US having a higher signal-to-noise ratio (SNR) and CEUS providing a higher contrast-to-tissue ratio (CTR). The lengthy imaging time needed for SR-US to allow perfusion and MB detection is an impediment to clinical adoption. Improvements in both SNR and CTR can enhance SR-US imaging by improving the detection of MBs and thus reducing imaging time. This study simultaneously evaluated nonlinear contrast pulse sequences (CPS) employing amplitude modulation (AM) and pulse inversion (PI) nonlinear CEUS imaging techniques, as well as combinations of the two (AMPI), against B-mode US imaging. The objective was to improve the rate of MB detection during SR-US. Imaging was performed in vitro and in vivo in the rat hind limb using a Vantage 256 research scanner (Verasonics Inc.). Four CPS compositions were compared with B-mode US imaging based on the number of MBs detected and localized in SR-US images. The use of a PI nonlinear CEUS imaging strategy improved SR-US imaging by increasing the number of MBs detected in a sequence of frames by an average of 28.3%, and by up to 52.6%, over a B-mode US imaging strategy, which would decrease imaging time accordingly.

Automatic and accurate segmentation of medical images is an important task because of its direct impact on both disease diagnosis and treatment. Segmentation of ultrasound (US) images is particularly challenging due to the presence of speckle noise. Recent deep learning approaches have demonstrated remarkable results in image segmentation tasks, including segmentation of US images. However, many of the newly proposed architectures are either task specific and generalize poorly, or are computationally expensive. In this paper, we show that the receptive field plays a more significant role in a network's performance than the network's depth or its number of parameters. We further show that, by controlling the size of the receptive field, a deep network can be replaced by a shallow one.
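As a concrete illustration of the receptive-field argument above, the short sketch below computes the theoretical receptive field of a stack of convolutional and pooling layers using the standard recurrence; the layer configuration in the example is an assumption for illustration, not a network from the paper.

```python
# Minimal sketch: theoretical receptive field of a stack of conv/pool layers,
# using the standard recurrence
#   rf_out   = rf_in + (kernel - 1) * jump_in
#   jump_out = jump_in * stride
# The encoder below is a hypothetical VGG-like stack, not any architecture
# described in the abstracts above.

def receptive_field(layers):
    """layers: list of (kernel_size, stride) tuples, input-to-output order."""
    rf, jump = 1, 1
    for kernel, stride in layers:
        rf += (kernel - 1) * jump   # widen by the kernel, scaled by current jump
        jump *= stride              # strides compose multiplicatively
    return rf

if __name__ == "__main__":
    # Three stages of conv(3x3, stride 1) x2 + pool(2x2, stride 2) -- assumed example.
    encoder = [(3, 1), (3, 1), (2, 2),
               (3, 1), (3, 1), (2, 2),
               (3, 1), (3, 1), (2, 2)]
    print("theoretical receptive field:", receptive_field(encoder), "pixels")
```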
The purpose of this study was to develop an automatic method for the segmentation of muscle cross-sectional area on transverse B-mode ultrasound images of the gastrocnemius medialis using a convolutional neural network (CNN). The dataset contains images with both normal and increased echogenicity. The manually annotated dataset consisted of 591 images from 200 subjects: 400 images from subjects with normal echogenicity and 191 from subjects with increased echogenicity. The images were extracted from the DICOM files and processed by the CNN, and the output was post-processed to obtain a finer segmentation. The final results were compared with the manual segmentations. Precision and recall scores (mean ± standard deviation) for the training, validation and test sets are 0.96 ± 0.05, 0.90 ± 0.18, 0.89 ± 0.15 and 0.97 ± 0.03, 0.89 ± 0.17, 0.90 ± 0.14, respectively. The CNN approach was also compared with another automatic algorithm and showed better performance. The proposed automatic method provides an accurate estimation of muscle cross-sectional area in muscles with different echogenicity levels.

Quantification of ovarian and follicular volume and follicle count is performed in clinical practice for diagnosis and management in assisted reproduction. Ovarian volume and Antral Follicle Count (AFC) are typically tracked over the ovulation cycle. Volumetric analysis of the ovary and follicles is manual and largely operator dependent. In this manuscript, we propose S-Net, a deep-learning method for automatic simultaneous segmentation of the ovary and follicles in 3D Transvaginal Ultrasound (TVUS). The proposed loss function restricts false detection of follicles outside the ovary. Additionally, we use a multi-layered loss to provide deep supervision for training the network. S-Net is optimized for inference time and memory while exploiting 3D context in a 2D deep-learning network. 66 3D TVUS volumes (13,200 2D image slices) were acquired from 66 subjects in this Institutional Review Board (IRB) approved study. The segmentation framework provides approximately 92% and 87% average Dice overlap with the ground-truth annotations for the ovary and follicles, respectively. We obtain state-of-the-art results with detection rates of 88%, 91% and 98% for follicles of size 2-4 mm, 4-12 mm and >12 mm, respectively.

3D ultrasound reconstruction technology has driven rapid development of ultrasound spine imaging in recent decades. However, current imaging apparatus is bulky and not portable. The objective of this study is to develop a new compact, wireless system that offers real-time visualization of spine images during data acquisition. A portable, Wi-Fi-based ultrasound scanner and a compact electromagnetic (EM) tracking system were assembled to acquire ultrasound transverse frames with location information, which could be reconstructed into a 3D spine image volume in real time. Validation was performed on 2D coronal images of vertebra phantoms, and in vivo data acquisition and reconstruction were demonstrated on volunteers. The results showed that the new system could provide reconstructed spine images in real time, with average reconstruction errors of about 1 mm (approximately the image pixel size).
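The real-time spine reconstruction described above hinges on mapping each tracked 2D frame into a shared 3D volume. The following minimal sketch shows one conventional way to do that voxel-filling step (a pixel-nearest-neighbour scheme); the frame size, pixel/voxel spacing and pose format are assumptions for illustration, not details of the system in the abstract.

```python
# Minimal sketch of pixel-nearest-neighbour freehand 3D ultrasound reconstruction:
# each 2D frame is mapped into the volume with its 4x4 image-to-world pose from
# the position sensor, and pixel values are accumulated into the nearest voxel.
import numpy as np

def reconstruct(frames, poses, vol_shape, voxel_mm=1.0, pixel_mm=0.1):
    """frames: list of 2D arrays; poses: list of 4x4 image-to-world transforms (mm)."""
    vol = np.zeros(vol_shape, dtype=np.float32)
    count = np.zeros(vol_shape, dtype=np.float32)
    for frame, pose in zip(frames, poses):
        h, w = frame.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        # Homogeneous pixel coordinates in the image plane (z = 0), scaled to mm.
        pts = np.stack([u.ravel() * pixel_mm,
                        v.ravel() * pixel_mm,
                        np.zeros(u.size),
                        np.ones(u.size)])
        world = pose @ pts                                   # 4 x N world coordinates (mm)
        idx = np.round(world[:3] / voxel_mm).astype(int)     # nearest voxel indices
        ok = np.all((idx >= 0) & (idx < np.array(vol_shape)[:, None]), axis=0)
        i, j, k = idx[:, ok]
        np.add.at(vol, (i, j, k), frame.ravel()[ok])
        np.add.at(count, (i, j, k), 1.0)
    return vol / np.maximum(count, 1.0)                      # average overlapping pixels
```

Averaging overlapping pixels is the simplest compounding choice; hole filling and interpolation-based schemes are common refinements when frames are sparse.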
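Several of the abstracts above report the Dice similarity index as their overlap metric. For reference, here is a minimal sketch of how it is conventionally computed on binary masks; the masks below are hypothetical, not data from any of the studies.

```python
# Dice similarity index on binary masks: 2|A ∩ B| / (|A| + |B|).
import numpy as np

def dice(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, truth).sum() / denom

if __name__ == "__main__":
    a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True   # 16-pixel square
    b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True   # shifted 16-pixel square
    print(f"Dice = {dice(a, b):.3f}")                      # 2*9/(16+16) = 0.5625
```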