SOFTWARE DEVELOPMENT FOR A CONTINUOUS-CRYSTAL PET SYSTEM APPLIED TO BREAST CANCER (DESARROLLO DEL SOFTWARE PARA UN SISTEMA PET DE CRISTAL CONTINUO APLICADO AL CÁNCER DE MAMA)
PID2019-107790RB-C22
Funding agency name: Agencia Estatal de Investigación
Funding agency acronym: AEI
Programme: Programa Estatal de Generación de Conocimiento y Fortalecimiento Científico y Tecnológico del Sistema de I+D+i
Subprogramme: Subprograma Estatal de Generación de Conocimiento
Call: Proyectos I+D
Call year: 2019
Managing unit: Plan Estatal de Investigación Científica y Técnica y de Innovación 2017-2020
Beneficiary institution: UNIVERSITAT POLITÈCNICA DE VALÈNCIA
Persistent identifier: http://dx.doi.org/10.13039/501100011033
Publications
Total results (including duplicates): 15. Found: 1 page(s)
Ultrasound breast images denoising using generative adversarial networks (GANs)
Digital.CSIC. Repositorio Institucional del CSIC
- Jiménez-Gaona, Yuliana
- Rodríguez-Álvarez, María José
- Escudero, Líder
- Sandoval, Carlos
- Lakshminarayanan, Vasudevan
The data that support the findings of this study are openly available in the Mendeley repository (https://data.mendeley.com/drafts/g3cmj46xyx).
INTRODUCTION: Ultrasound, in conjunction with mammography imaging, plays a vital role in the early detection and diagnosis of breast cancer. However, speckle noise affects medical ultrasound images and degrades visual radiological interpretation. Speckle carries information about the interactions of the ultrasound pulse with the tissue microstructure, which generally causes several difficulties in identifying malignant and benign regions. The application of deep learning to image denoising has gained increasing attention in recent years.
OBJECTIVES: The main objective of this work is to reduce speckle noise while preserving features and details in breast ultrasound images using GAN models.
METHODS: We proposed two GAN models (conditional GAN and Wasserstein GAN) for speckle denoising of public breast ultrasound databases: BUSI (Dataset A) and UDIAT (Dataset B). The conditional GAN model was trained using the U-Net architecture, and the WGAN model was trained using the ResNet architecture. Image quality for both algorithms was measured against standard values of Peak Signal-to-Noise Ratio (PSNR, 35–40 dB) and Structural Similarity Index (SSIM, 0.90–0.95).
RESULTS: The experimental analysis clearly shows that the conditional GAN model achieves better breast ultrasound despeckling performance over the datasets, with PSNR = 38.18 dB and SSIM = 0.96, compared with the WGAN model (PSNR = 33.0068 dB and SSIM = 0.91) on the small ultrasound training datasets.
CONCLUSIONS: The observed performance differences between CGAN and WGAN will help to better implement new tasks in a computer-aided detection/diagnosis (CAD) system. In future work, these data can be used as CAD training input for image classification, reducing overfitting and improving the performance and accuracy of deep convolutional algorithms., This project has been co-financed by the Spanish Government Grant Deepbreast PID2019-107790RB-C22 funded by MCIN/AEI/10.13039/501100011033., Peer reviewed
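The denoising results above are reported as PSNR and SSIM values. As a hedged illustration (not the authors' evaluation code), the same two metrics can be computed for a reference/denoised image pair with scikit-image; the arrays below are random stand-ins for real ultrasound images:

    # Hedged sketch: compute PSNR and SSIM for a denoised image against its
    # reference, as in the abstract's evaluation. The arrays are placeholders.
    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    rng = np.random.default_rng(0)
    reference = rng.random((256, 256))                                  # stand-in reference image
    denoised = np.clip(reference + 0.02 * rng.standard_normal((256, 256)), 0, 1)  # stand-in output

    psnr = peak_signal_noise_ratio(reference, denoised, data_range=1.0)
    ssim = structural_similarity(reference, denoised, data_range=1.0)
    print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")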
Ultrasound Breast images denoising using Generative Adversarial Networks (GANs) [Dataset]
Digital.CSIC. Repositorio Institucional del CSIC
- Jiménez-Gaona, Yuliana
- Rodríguez-Álvarez, María José
- Escudero, Líder
- Sandoval, Carlos
- Lakshminarayanan, Vasudevan
Ultrasound imaging plays an important role in screening, early detection, and diagnosis of breast cancer. However, speckle noise affects medical ultrasound images and degrades visual radiological evaluation, which generally causes several difficulties in identifying malignant and benign regions. Denoising is an important preprocessing step for medical images, because it restores maximum detail while preserving edges and all information in the images, enabling accurate classification of anomalies. To reduce speckle noise while retaining image features, we proposed two GAN models for breast ultrasound speckle denoising as an image preprocessing step: (i) a conditional GAN and (ii) a WGAN. Denoised image quality was measured by peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). The experimental analysis clearly shows that the CGAN method achieves better visual image quality, with PSNR = 38.18 dB and SSIM = 0.96, compared with the WGAN model (PSNR = 33.0068 dB and SSIM = 0.91) on the small ultrasound training datasets. We therefore conclude that GANs can help denoise ultrasound medical imaging; as future work, these data can be used as input to computer-aided systems for image segmentation and classification, reducing manual dependence and helping radiologists improve breast cancer detection., Ministerio de Educación y Formación Profesional
PID2019-107790RB-C22 MCIN/AEI/10.13039/501100011033., Peer reviewed
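The dataset record above pairs clean and speckle-corrupted breast ultrasound images for denoising experiments. A minimal, hedged sketch of how such (noisy, clean) training pairs are commonly simulated with a multiplicative speckle model; this is a generic illustration, not the construction procedure of the deposited dataset:

    # Hedged sketch: build (noisy, clean) pairs with multiplicative speckle noise,
    # a common way to prepare data for denoising networks. Illustrative only.
    import numpy as np
    from skimage.util import random_noise

    def make_speckle_pair(clean, variance=0.05):
        """Return (noisy, clean) for one image with values in [0, 1]."""
        noisy = random_noise(clean, mode="speckle", var=variance)
        return noisy.astype(np.float32), clean.astype(np.float32)

    clean = np.clip(np.random.rand(128, 128), 0.0, 1.0)   # stand-in ultrasound image
    noisy, target = make_speckle_pair(clean)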
Ultrasound breast images denoising using generative adversarial networks (GANs)
RiuNet. Repositorio Institucional de la Universitat Politècnica de València
- Jimenez-Gaona, Yuliana
- Escudero, Lider
- Sandoval, Carlos
- Lakshminarayanan, Vasudevan
- Rodríguez-Álvarez, M.J.
[EN] INTRODUCTION: Ultrasound, in conjunction with mammography imaging, plays a vital role in the early detection and diagnosis of breast cancer. However, speckle noise affects medical ultrasound images and degrades visual radiological interpretation. Speckle carries information about the interactions of the ultrasound pulse with the tissue microstructure, which generally causes several difficulties in identifying malignant and benign regions. The application of deep learning to image denoising has gained increasing attention in recent years.
OBJECTIVES: The main objective of this work is to reduce speckle noise while preserving features and details in breast ultrasound images using GAN models.
METHODS: We proposed two GAN models (conditional GAN and Wasserstein GAN) for speckle denoising of public breast ultrasound databases: BUSI (Dataset A) and UDIAT (Dataset B). The conditional GAN model was trained using the U-Net architecture, and the WGAN model was trained using the ResNet architecture. Image quality for both algorithms was measured against standard values of Peak Signal-to-Noise Ratio (PSNR, 35–40 dB) and Structural Similarity Index (SSIM, 0.90–0.95).
RESULTS: The experimental analysis clearly shows that the conditional GAN model achieves better breast ultrasound despeckling performance over the datasets, with PSNR = 38.18 dB and SSIM = 0.96, compared with the WGAN model (PSNR = 33.0068 dB and SSIM = 0.91) on the small ultrasound training datasets.
CONCLUSIONS: The observed performance differences between CGAN and WGAN will help to better implement new tasks in a computer-aided detection/diagnosis (CAD) system. In future work, these data can be used as CAD training input for image classification, reducing overfitting and improving the performance and accuracy of deep convolutional algorithms., This project has been co-financed by the Spanish Government Grant Deepbreast PID2019-107790RB-C22 funded by MCIN/AEI/10.13039/501100011033.
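The abstract above trains the WGAN variant with a ResNet backbone. The sketch below is a hedged, minimal illustration of the Wasserstein training objective only (critic and generator updates with weight clipping); the tiny convolutional networks and tensors are placeholders, not the paper's ResNet/U-Net models or data:

    # Hedged sketch of one WGAN critic/generator update with weight clipping.
    # The tiny networks and random tensors are placeholders, not the paper's models.
    import torch
    import torch.nn as nn

    critic = nn.Sequential(nn.Conv2d(1, 16, 3, 2, 1), nn.LeakyReLU(0.2),
                           nn.Conv2d(16, 1, 3, 2, 1), nn.AdaptiveAvgPool2d(1),
                           nn.Flatten())
    generator = nn.Sequential(nn.Conv2d(1, 16, 3, 1, 1), nn.ReLU(),
                              nn.Conv2d(16, 1, 3, 1, 1))
    opt_c = torch.optim.RMSprop(critic.parameters(), lr=5e-5)
    opt_g = torch.optim.RMSprop(generator.parameters(), lr=5e-5)

    noisy = torch.rand(4, 1, 64, 64)   # speckled input batch (stand-in data)
    clean = torch.rand(4, 1, 64, 64)   # matching clean targets (stand-in data)

    # Critic step: maximise E[D(real)] - E[D(fake)], i.e. minimise its negation.
    fake = generator(noisy).detach()
    loss_c = -(critic(clean).mean() - critic(fake).mean())
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()
    for p in critic.parameters():      # weight clipping keeps the critic roughly 1-Lipschitz
        p.data.clamp_(-0.01, 0.01)

    # Generator step: minimise -E[D(G(noisy))]; only generator parameters are stepped.
    loss_g = -critic(generator(noisy)).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()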
BraNet: A mobile application for breast image classification based on deep learning algorithms
RiuNet. Repositorio Institucional de la Universitat Politècnica de València
- Jimenez-Gaona, Yuliana
- Castillo-Malla, Darwin Patricio
- García, Santiago
- Carrión, Diana
- Corral, Patricio
- Lakshminarayanan, Vasudevan
- Rodríguez-Álvarez, M.J.
[EN] Mobile health apps are widely used for breast cancer detection using artificial intelligence algorithms, providing radiologists with second opinions and reducing false diagnoses. This study aims to develop an open-source mobile app named "BraNet" for 2D breast imaging segmentation and classification using deep learning algorithms. During the offline phase, an SNGAN model was first trained for synthetic image generation, and these images were subsequently used to pre-train the SAM and ResNet18 segmentation and classification models. During the online phase, the BraNet app was developed using the React Native framework, offering a modular deep-learning pipeline for mammography (DM) and ultrasound (US) breast imaging classification. The application operates on a client-server architecture and was implemented in Python for iOS and Android devices. Two diagnostic radiologists were then given a reading test of 290 original RoI images to assign the perceived breast tissue type, and reader agreement was assessed using the kappa coefficient. The BraNet mobile app exhibited the highest accuracy in benign and malignant US image classification (94.7%/93.6%) compared with DM during training I (80.9%/76.9%) and training II (73.7%/72.3%). This contrasts with the radiological experts' accuracy, 29% for DM classification versus 70% for US for both readers, who achieved higher accuracy in US RoI classification than in DM images. The kappa values indicate fair agreement (0.3) for DM images and moderate agreement (0.4) for US images for both readers. This means that the amount of data is not the only essential factor in training deep learning algorithms; it is also vital to consider the variety of abnormalities, especially in mammography data, where several BI-RADS categories are present (microcalcifications, nodules, masses, asymmetry, and dense breasts) and can affect the API model's accuracy., Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. Funding was obtained from the Universidad Técnica Particular de Loja, PROY_INV_QU_2022_3576. CRUE UNIVERSITAT POLITÈCNICA DE VALÈNCIA.
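Reader agreement in the BraNet study above is summarized with the kappa coefficient. A hedged sketch of how such inter-reader agreement is typically computed; the label lists are invented placeholders, not the study's readings:

    # Hedged sketch: Cohen's kappa for two readers' benign/malignant labels.
    # The label lists are invented placeholders, not the study's reading data.
    from sklearn.metrics import cohen_kappa_score

    reader_1 = ["benign", "malignant", "benign", "benign", "malignant", "benign"]
    reader_2 = ["benign", "malignant", "malignant", "benign", "benign", "benign"]

    kappa = cohen_kappa_score(reader_1, reader_2)
    print(f"kappa = {kappa:.2f}")   # roughly 0.2-0.4 = fair, 0.4-0.6 = moderate agreement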
Design, Convergence and Stability of a Fourth-Order Class of Iterative Methods for Solving Nonlinear Vectorial Problems
RiuNet. Repositorio Institucional de la Universitat Politècnica de València
- Cordero Barbero, Alicia
- Jordan-Lluch, Cristina
- Sanabria-Codesal, Esther
- Torregrosa Sánchez, Juan Ramón
[EN] A new parametric family of iterative schemes for solving nonlinear systems is presented. Fourth-order convergence is demonstrated, and its stability is analyzed as a function of the parameter values. This study allows us to detect the most stable elements of the class, to find the fractals in the boundary of the basins of attraction, and to reject those members with chaotic behavior. Some numerical tests show the performance of the new methods, confirm the theoretical results, and allow us to compare the proposed schemes with other known ones. This research was supported by PGC2018-095896-B-C22, PID2019-107790RB-C22 and PGC2018-094889-B-I00 (MCIU/AEI/FEDER, UE).
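The stability study above inspects basins of attraction of iterative schemes on the dynamical plane. As a hedged illustration of that kind of plot only (classical Newton iteration on z^3 - 1 = 0, not the paper's fourth-order parametric family; numpy and matplotlib are assumed available):

    # Hedged sketch: basins of attraction of Newton's method for z**3 - 1 = 0.
    # Illustrates the dynamical-plane study mentioned above; not the paper's family.
    import numpy as np
    import matplotlib.pyplot as plt

    roots = np.array([1.0, -0.5 + 0.5j * np.sqrt(3), -0.5 - 0.5j * np.sqrt(3)])
    x = np.linspace(-2, 2, 600)
    z = x[None, :] + 1j * x[:, None]           # grid of starting points

    with np.errstate(divide="ignore", invalid="ignore"):
        for _ in range(40):                    # Newton iteration z <- z - f(z)/f'(z)
            z = z - (z**3 - 1) / (3 * z**2)

    basin = np.argmin(np.abs(z[..., None] - roots), axis=-1)   # nearest root index
    plt.imshow(basin, extent=[-2, 2, -2, 2])
    plt.title("Basins of attraction, Newton on z^3 - 1")
    plt.show()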
Artificial Intelligence on FDG PET Images Identifies Mild Cognitive Impairment Patients with Neurodegenerative Disease
RiuNet. Repositorio Institucional de la Universitat Politècnica de València
- Prats-Climent, Joan
- Rodríguez-Álvarez, M.J.
- Gandia-Ferrero, Maria Teresa
- Torres-Espallardo, Irene
- Álvarez-Sanchez, Lourdes
- Martinez-Sanchis, Begoña
- Cháfer-Pericás, Consuelo
- Gómez-Rico, Ignacio
- Cerdá-Alberich, Leonor
- Aparici-Robles, Fernando
- Baquero-Toledo, Miquel
- Marti-Bonmati, Luis
[EN] The purpose of this project is to develop and validate a Deep Learning (DL) FDG PET imaging algorithm able to identify patients with any neurodegenerative disease (Alzheimer's Disease (AD), Frontotemporal Degeneration (FTD) or Dementia with Lewy Bodies (DLB)) among patients with Mild Cognitive Impairment (MCI). A 3D convolutional neural network was trained using images from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The ADNI dataset used for model training and testing consisted of 822 subjects (472 AD and 350 MCI). Validation was performed on an independent dataset from La Fe University and Polytechnic Hospital. This dataset contained 90 subjects with MCI, 71 of whom developed a neurodegenerative disease (64 AD, 4 FTD and 3 DLB), while 19 did not develop any neurodegenerative disease. The model had 79% accuracy, 88% sensitivity and 71% specificity in the identification of patients with neurodegenerative diseases when tested on the 10% ADNI test split, achieving an area under the receiver operating characteristic curve (AUC) of 0.90. On the external validation, the model preserved 80% balanced accuracy, 75% sensitivity, 84% specificity and 0.86 AUC. This binary classifier model based on FDG PET images allows the early prediction of neurodegenerative diseases in MCI patients in standard clinical settings with an overall 80% balanced classification accuracy., This work was financially supported by INBIO 2019 (DEEPBRAIN), INNVA1/2020/83 (DEEPPET) funded by Generalitat Valenciana, and PID2019-107790RB-C22 funded by MCIN/AEI/10.13039/501100011033/. Data collection and sharing for this project were funded by the Alzheimer's Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defense award number W81XWH-12-2-0012). ADNI is funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, and through generous contributions from the following: AbbVie; Alzheimer's Association; Alzheimer's Drug Discovery Foundation; Araclon Biotech; BioClinica, Inc.; Biogen; Bristol-Myers Squibb Company; CereSpir, Inc.; Cogstate; Eisai Inc.; Elan Pharmaceuticals, Inc.; Eli Lilly and Company; EuroImmun; F. Hoffmann-La Roche Ltd and its affiliated company Genentech, Inc.; Fujirebio; GE Healthcare; IXICO Ltd.; Janssen Alzheimer Immunotherapy Research & Development, LLC.; Johnson & Johnson Pharmaceutical Research & Development LLC.; Lumosity; Lundbeck; Merck & Co., Inc.; Meso Scale Diagnostics, LLC.; NeuroRx Research; Neurotrack Technologies; Novartis Pharmaceuticals Corporation; Pfizer Inc.; Piramal Imaging; Servier; Takeda Pharmaceutical Company; and Transition Therapeutics. The Canadian Institutes of Health Research is providing funds to support ADNI clinical sites in Canada. Private sector contributions are facilitated by the Foundation for the National Institutes of Health (www.fnih.org). The grantee organization is the Northern California Institute for Research and Education, and the study is coordinated by the Alzheimer's Therapeutic Research Institute at the University of Southern California. ADNI data are disseminated by the Laboratory for Neuro Imaging at the University of Southern California.
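The study above trains a 3D convolutional network on FDG PET volumes for a binary decision. A hedged, minimal PyTorch sketch of that kind of architecture; the layer sizes, volume shape, and two-class head are illustrative assumptions, not the published model:

    # Hedged sketch: a small 3D CNN binary classifier for PET volumes.
    # Channel counts and volume sizes are placeholders, not the paper's network.
    import torch
    import torch.nn as nn

    class TinyPET3DNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.AdaptiveAvgPool3d(1),
            )
            self.classifier = nn.Linear(16, 2)   # neurodegenerative vs. non-degenerative MCI

        def forward(self, volume):               # volume: (batch, 1, D, H, W)
            x = self.features(volume).flatten(1)
            return self.classifier(x)

    logits = TinyPET3DNet()(torch.rand(2, 1, 64, 64, 64))
    print(logits.shape)                          # torch.Size([2, 2])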
2D study of a joint reconstruction algorithm for limited angle PET geometries
RiuNet. Repositorio Institucional de la Universitat Politècnica de València
- Vergara, Marina
- Rezaei, Ahmadreza
- Nuyts, Johan
- Rodríguez-Álvarez, M.J.
- Benlloch Baviera, Jose María
[EN] Recently, wide interest in organ-dedicated PET systems has emerged. Some of those systems have geometries that produce incomplete sampling of the tomographic data due to limited angular coverage and/or truncation, which leads to artifacts in the reconstructed image. Moreover, they are often designed as stand-alone systems, which implies the absence of anatomical information with which to estimate the attenuation factors. In this work, we propose a joint reconstruction algorithm for estimating the activity and the attenuation factors in a limited-angle PET system with time-of-flight capabilities. The algorithm is based on MLACF and uses literature linear attenuation coefficients in a known tissue-class region to obtain absolute quantification. We evaluate the algorithm through simple 2D simulations for different TOF resolutions and angular coverages. The results show that, with good TOF resolution, quantitative PET imaging can be achieved even with aggressive angular limitation.
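The joint reconstruction above extends maximum-likelihood iterations. As hedged background in standard notation (not the paper's derivation), the MLEM activity update that MLACF-type algorithms build on is, with $y_i$ the measured counts in line-of-response/TOF bin $i$, $a_{ij}$ the system-matrix element coupling bin $i$ and voxel $j$, and $\lambda_j$ the activity in voxel $j$:

    \lambda_j^{(n+1)} \;=\; \frac{\lambda_j^{(n)}}{\sum_i a_{ij}} \sum_i a_{ij}\, \frac{y_i}{\sum_k a_{ik}\, \lambda_k^{(n)}}

MLACF alternates an update of this kind with an update of the per-line attenuation correction factors, which is what removes the need for an anatomical attenuation map in the stand-alone systems described above.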
Simulation Study of a Frame-Based Motion Correction Algorithm for Positron Emission Imaging
RiuNet. Repositorio Institucional de la Universitat Politècnica de València
- Espinós-Morató, Héctor
- Cascales-Picó, David
- Vergara, Marina
- Hernández-Martínez, Ángel
- Benlloch Baviera, Jose María
- Rodríguez-Álvarez, M.J.
[EN] Positron emission tomography (PET) is a functional non-invasive imaging modality that uses radioactive substances (radiotracers) to measure changes in metabolic processes. Advances in scanner technology and data acquisition in the last decade have led to the development of more sophisticated PET devices with good spatial resolution (1–3 mm full width at half maximum, FWHM). However, involuntary motions produced by the patient inside the scanner lead to image degradation and potentially to misdiagnosis. The adverse effect of motion on the reconstructed image increases as the spatial resolution of current scanners continues to improve. To correct this effect, motion correction techniques are becoming increasingly popular and more widely studied. This work presents a simulation study of image motion correction using a frame-based algorithm. The method is able to cut the data acquired from the scanner into frames, taking into account the size of the object of study. This approach allows working with low statistical information without losing image quality. The frames are later registered using spatio-temporal registration developed in a multi-level way. To validate these results, several performance tests are applied to a set of simulated moving phantoms. The results obtained show that the method minimizes the intra-frame motion, improves the signal intensity over the background in comparison with other literature methods, produces excellent values of similarity with the ground-truth (static) image, and is able to find a limit on the patient-injected dose when some prior knowledge of the lesion is present., This research has been co-financed by the Spanish Government Grants TEC2016-79884-C2, PID2019-107790RB-C22, and PEJ2018-002230-A-AR; the Generalitat Valenciana GJIDI/2018/A/040l and the PTA2019-017113-1/AEI/10.13039/501100011033; the European Union through the European Regional Development Fund (ERDF); and the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Grant Agreement No. 695536).
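The motion-correction study above divides the acquisition into frames and registers them before combining. A hedged sketch of the frame-alignment idea using translation-only phase correlation; this generic illustration is not the paper's multi-level spatio-temporal registration:

    # Hedged sketch: align a stack of frames to the first frame by the translation
    # estimated with phase correlation, then average the aligned frames.
    # Generic illustration; the paper uses a multi-level spatio-temporal method.
    import numpy as np
    from scipy.ndimage import shift as nd_shift
    from skimage.registration import phase_cross_correlation

    def align_and_sum(frames):
        """frames: sequence of 2D frames reconstructed from list-mode chunks."""
        reference = frames[0]
        aligned = [reference]
        for frame in frames[1:]:
            offset, _, _ = phase_cross_correlation(reference, frame)
            aligned.append(nd_shift(frame, offset))
        return np.mean(aligned, axis=0)   # motion-compensated average image

    corrected = align_and_sum(np.random.rand(5, 128, 128))   # stand-in frame stack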
Breast mass regions classification from mammograms using convolutional neural networks and transfer learning
RiuNet. Repositorio Institucional de la Universitat Politècnica de València
- Jimenez-Gaona, Yuliana
- Carrión, Diana
- Castillo-Malla, Darwin
- Lakshminarayanan, Vasudevan
- Rodríguez-Álvarez, M.J.
[EN] This study introduces a novel approach aimed at enhancing the quality of digital mammography images through pre-processing techniques to improve breast cancer detection accuracy. The primary objective is to enhance image resolution, leading to more precise breast tissue segmentation and subsequent classification using convolutional neural networks (CNNs). Three recognized public mammography databases, CBIS-DDSM, Mini-MIAS, and INbreast, were used as pre-processing data. Our statistical findings revealed that the EDSR method (PSNR = 39.05 dB / SSIM = 0.90) consistently outperformed SR-RDN (PSNR = 32.68 dB / SSIM = 0.82) in visual image quality. Similarly, UNet demonstrated superior performance over SegNet, with an average Intersection over Union (IoU) of 0.862, an average Dice coefficient of 0.991, and an accuracy of 0.947 in Region of Interest (RoI) segmentation. In conclusion, the ResNet model contributed to enhanced accuracy compared with conventional machine learning algorithms; however, it did not surpass state-of-the-art deep CNN-based classifiers, achieving an accuracy of 75%., This project has been co-financed by the Spanish Government Grant PID2019-107790RB-C22, "Software development for a continuous PET crystal system applied to breast cancer".
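The classification stage above fine-tunes a ResNet on RoI patches via transfer learning. A hedged, minimal torchvision sketch of that general setup; the freezing policy, two-class head, and hyperparameters are illustrative assumptions, not the paper's training configuration:

    # Hedged sketch: transfer learning with an ImageNet-pretrained ResNet-18 for a
    # benign/malignant RoI classifier. Data and hyperparameters are placeholders.
    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for param in model.parameters():                 # freeze the pretrained backbone
        param.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, 2)    # new trainable 2-class head

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    images = torch.rand(8, 3, 224, 224)              # stand-in RoI batch
    labels = torch.randint(0, 2, (8,))
    loss = criterion(model(images), labels)
    optimizer.zero_grad(); loss.backward(); optimizer.step()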
MR Images, Brain Lesions, and Deep Learning
RiuNet. Repositorio Institucional de la Universitat Politècnica de València
- Castillo, Darwin
- Lakshminarayanan, Vasudevan
- Rodríguez-Álvarez, M.J.
[EN] Medical brain image analysis is a necessary step in computer-assisted/computer-aided diagnosis (CAD) systems. Advancements in both hardware and software in the past few years have led to improved segmentation and classification of various diseases. In the present work, we review the published literature on systems and algorithms that allow for classification, identification, and detection of white matter hyperintensities (WMHs) of brain magnetic resonance (MR) images, specifically in cases of ischemic stroke and demyelinating diseases. For the selection criteria, we used bibliometric networks. Of a total of 140 documents, we selected 38 articles that deal with the main objectives of this study. Based on the analysis and discussion of the revised documents, there is constant growth in the research and development of new deep learning models to achieve the highest accuracy and reliability of the segmentation of ischemic and demyelinating lesions. Models with good performance metrics (e.g., Dice similarity coefficient, DSC: 0.99) were found; however, there is little practical application due to the use of small datasets and a lack of reproducibility. Therefore, the main conclusion is that there should be multidisciplinary research groups to overcome the gap between CAD developments and their deployment in the clinical environment, This project was co-financed by the Spanish Government (grant PID2019-107790RB-C22), "Software Development for a Continuous PET Crystal System Applied to Breast Cancer"
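The review above reports segmentation performance mainly through the Dice similarity coefficient (DSC). A hedged sketch of how DSC is computed for binary lesion masks; the masks below are random placeholders:

    # Hedged sketch: Dice similarity coefficient between a predicted and a
    # reference binary lesion mask. The masks here are random placeholders.
    import numpy as np

    def dice_coefficient(prediction, reference, eps=1e-7):
        prediction = prediction.astype(bool)
        reference = reference.astype(bool)
        intersection = np.logical_and(prediction, reference).sum()
        return (2.0 * intersection + eps) / (prediction.sum() + reference.sum() + eps)

    pred = np.random.rand(64, 64) > 0.5
    ref = np.random.rand(64, 64) > 0.5
    print(f"DSC = {dice_coefficient(pred, ref):.3f}")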
2D feasibility study of joint reconstruction of attenuation and activity in limited angle TOF-PET
RiuNet. Repositorio Institucional de la Universitat Politècnica de València
- Vergara, Marina
- Rezaei, Ahmadreza
- Schramm, Georg
- Nuyts, Johan
- Rodríguez-Álvarez, M.J.
- Benlloch Baviera, Jose María
[EN] Several research groups are studying organ-dedicated limited angle positron emission tomography (PET) systems to optimize performance-cost ratio, sensitivity, access to the patient, and/or flexibility. Often open systems are considered, typically consisting of two detector panels of various sizes. Such systems provide incomplete sampling due to limited angular coverage and/or truncation, which leads to artifacts in the reconstructed activity images. In addition, these organ-dedicated PET systems are usually stand-alone systems, and as a result, no attenuation information can be obtained from anatomical images acquired in the same imaging session. It has been shown that the use of time-of-flight (TOF) information reduces incomplete data artifacts and enables the joint estimation of the activity and the attenuation factors. In this work, we explore with simple 2-D simulations the performance and stability of a joint reconstruction algorithm, for imaging with a limited angle PET system. The reconstruction is based on the so-called maximum-likelihood attenuation correction factors (MLACF) algorithm and uses linear attenuation coefficients in a known-tissue-class region to obtain absolute quantification. Different panel sizes and different TOF resolutions are considered. The noise propagation is compared to that of MLEM reconstruction with exact attenuation correction (AC) for the same PET system. The results show that with good TOF resolution, images of good visual quality can be obtained. If also a good scatter correction can be implemented, quantitative PET imaging will be possible. Further research, in particular on scatter correction, is required., This work was supported in part by the European Research Council (ERC) through the European Unions Horizon 2020 Research and Innovation Program under Grant 695536; in part by the Research Foundation Flanders (FWO) under Project 12T7118N and Project G062220N; and in part by the NIH Project under Grant 1P41EB017183-01A1.
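As hedged background on the quantity being estimated above (standard notation, not a result of the paper): the attenuation correction factor of line of response $i$ follows the Beer-Lambert law

    a_i \;=\; \exp\!\Bigl(-\sum_j l_{ij}\, \mu_j\Bigr),

where $\mu_j$ is the linear attenuation coefficient in voxel $j$ and $l_{ij}$ the intersection length of the line with that voxel. Joint estimation from TOF data determines these factors only up to a global scale, which is why the known tissue-class attenuation values mentioned in the abstract are needed to obtain absolute quantification.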
Spectral Reflectance Reconstruction Using Fuzzy Logic System Training: Color Science Application
RiuNet. Repositorio Institucional de la Universitat Politècnica de València
- Amiri, Morteza Maali
- Fairchild, Mark D.
- Garcia-Nieto, Sergio
- Morillas, Samuel
[EN] In this work, we address the problem of spectral reflectance recovery from both CIEXYZ and RGB values by means of a machine learning approach within the fuzzy logic framework, which constitutes the first application of fuzzy logic to these tasks. We train a fuzzy logic inference system using the Macbeth ColorChecker DC and test its performance with a 130-sample target set of artists' paints. The resulting fuzzy logic inference system (FIS) performs quite accurately. We studied different parameter settings within the training to achieve a meaningful, overfitting-free system. We compare the system's performance against previous successful methods and observe that, both spectrally and colorimetrically, our approach substantially outperforms these classical methods. In addition, from the trained FIS we extract the fuzzy rules that the system has learned, which provide insightful information about how the RGB/XYZ inputs are related to the outputs. That is, once the system is trained, we extract the codified knowledge used to relate inputs and outputs, and are thus able to assign a physical and/or conceptual meaning to its performance, which allows us not only to understand the procedure applied by the system but also to acquire insight that may in turn lead to further improvements. In particular, we find that both trained systems use four reference spectral curves, with some similarities, that are combined in a non-linear way to predict spectral curves for other inputs. Note that being able to understand the method applied by the trained system is an interesting difference from other 'black box' machine learning approaches, such as the currently fashionable convolutional neural networks, whose downside is that their internal way of proceeding cannot be interpreted. Another contribution of this work is to serve as an example of how, through the construction of a FIS, knowledge relating inputs and outputs in ground-truth datasets can be extracted, so that an analogous strategy could be followed for other problems in color and spectral science., Samuel Morillas acknowledges the support of the Spanish Ministry of Science under grants PRX17/00384, PRX16/00050 and PID2019-107790RB-C22.
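The study above benchmarks its fuzzy inference system against classical spectral-recovery methods. A hedged sketch of one such classical baseline, least-squares (pseudo-inverse) reconstruction of reflectance spectra from three-channel values; the training matrices are random placeholders, not the ColorChecker DC measurements:

    # Hedged sketch: classical pseudo-inverse recovery of spectral reflectance from
    # 3-channel (XYZ or RGB) values, the kind of baseline the paper compares against.
    # Training matrices below are random placeholders.
    import numpy as np

    n_wavelengths, n_train = 31, 140                      # e.g. 400-700 nm in 10 nm steps
    R_train = np.random.rand(n_wavelengths, n_train)      # training reflectances (columns)
    C_train = np.random.rand(3, n_train)                  # matching XYZ/RGB values (columns)

    # Linear map M minimising ||R_train - M @ C_train||_F (least squares).
    M = R_train @ np.linalg.pinv(C_train)

    c_new = np.random.rand(3)                             # new tristimulus measurement
    r_estimated = M @ c_new                               # recovered 31-band reflectance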
Fast Energy Dependent Scatter Correction for List-Mode PET Data
RiuNet. Repositorio Institucional de la Universitat Politècnica de València
- Álvarez-Gómez, Juan Manuel
- Santos-Blasco, Joaquín
- Moliner Martínez, Laura
- Rodríguez-Álvarez, M.J.
[EN] Improvements in the energy resolution of modern positron emission tomography (PET) detectors have created opportunities to implement energy-based scatter correction algorithms. Here, we use the energy information of auxiliary windows to estimate the scatter component. Our method is directly implemented in an iterative reconstruction algorithm, generating a scatter-corrected image without the need for sinograms. The purpose was to implement a fast energy-based scatter correction method for list-mode PET data as a practical approach when an attenuation map cannot be used to account for scatter degradation. The proposed method was evaluated using Monte Carlo simulations of various digital phantoms. It accurately estimated the scatter fraction distribution and improved the image contrast in the simulated cases studied. We conclude that the proposed scatter correction method can effectively correct scattered events, including multiple scatters and those originating from sources outside the field of view., This work was supported, in part, by the Spanish Government grant PID2019-107790RB-C22 and by Ayuda Predoctoral ACIF/2018/105, Generalitat Valenciana.
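The method above draws its scatter estimate from auxiliary energy windows. As a hedged illustration of the general energy-window idea only (a classical dual-energy-window style estimate, not the paper's list-mode algorithm; all numbers are placeholders):

    # Hedged sketch of a dual-energy-window style scatter estimate: counts in a
    # lower auxiliary window, scaled by a calibration factor k, approximate the
    # scatter contaminating the photopeak window. Numbers are placeholders; the
    # paper's list-mode algorithm is more elaborate than this classical idea.
    import numpy as np

    photopeak_counts = np.random.poisson(200.0, size=(64, 64)).astype(float)
    lower_window_counts = np.random.poisson(60.0, size=(64, 64)).astype(float)

    k = 0.5                                    # calibration factor (placeholder)
    scatter_estimate = k * lower_window_counts
    corrected = np.clip(photopeak_counts - scatter_estimate, 0.0, None)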
Deep-Learning-Based Computer-Aided Systems for Breast Cancer Imaging: A Critical Review
RiuNet. Repositorio Institucional de la Universitat Politècnica de València
- Jiménez-Gaona, Yuliana
- Lakshminarayanan, Vasudevan
- Rodríguez-Álvarez, M.J.
[EN] This paper provides a critical review of the literature on deep learning applications in breast tumor diagnosis using ultrasound and mammography images. It also summarizes recent advances in computer-aided diagnosis/detection (CAD) systems, which make use of new deep learning methods to automatically recognize breast images and improve the accuracy of diagnoses made by radiologists. This review is based upon published literature in the past decade (January 2010-January 2020), where we obtained around 250 research articles, and after an eligibility process, 59 articles were presented in more detail. The main findings in the classification process revealed that new DL-CAD methods are useful and effective screening tools for breast cancer, thus reducing the need for manual feature extraction. The breast tumor research community can utilize this survey as a basis for their current and future studies., This project has been co-financed by the Spanish Government Grant PID2019-107790RB-C22, "Software development for a continuous PET crystal systems applied to breast cancer".
On Principal Fuzzy Metric Spaces
RiuNet. Repositorio Institucional de la Universitat Politècnica de València
- Gregori Gregori, Valentín
- Miñana, Juan-José
- Morillas, Samuel
- Sapena Piera, Almanzor
[EN] In this paper, we deal with the notion of fuzzy metric space (X, M, *), or simply X, due to George and Veeramani. It is well known that such fuzzy metric spaces, in general, are not completable and also that there exist p-Cauchy sequences which are not Cauchy. We prove that if every p-Cauchy sequence in X is Cauchy, then X is principal, and we observe that the converse is false, in general. Hence, we introduce and study a stronger concept than principal, called strongly principal. Moreover, X is called weak p-complete if every p-Cauchy sequence is p-convergent. We prove that if X is strongly principal (or weak p-complete principal), then the family of p-Cauchy sequences agrees with the family of Cauchy sequences. Among other results related to completeness, we prove that every strongly principal fuzzy metric space where M is strong with respect to an integral (positive) t-norm * admits completion., Samuel Morillas acknowledges financial support from the Ministerio de Ciencia e Innovación of Spain under grant PID2019-107790RB-C22 funded by MCIN/AEI/10.13039/501100011033. Juan-José Miñana acknowledges financial support from Proyecto PGC2018-095709-B-C21, financed by MCIN/AEI/10.13039/501100011033 and FEDER "Una manera de hacer Europa", and from project BUGWRIGHT2; this last project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 871260. He also acknowledges support from Generalitat Valenciana under grant CIAICO/2021/137. This publication reflects only the authors' views and the European Union is not liable for any use that may be made of the information contained therein.
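For reference, the George and Veeramani notion used throughout the paper above: a fuzzy metric space is a triple $(X, M, *)$ with $*$ a continuous t-norm and $M : X \times X \times (0,\infty) \to [0,1]$ satisfying, for all $x, y, z \in X$ and $s, t > 0$:

    \begin{aligned}
    & M(x,y,t) > 0, \\
    & M(x,y,t) = 1 \iff x = y, \\
    & M(x,y,t) = M(y,x,t), \\
    & M(x,y,t) * M(y,z,s) \le M(x,z,t+s), \\
    & M(x,y,\cdot)\colon (0,\infty) \to [0,1] \text{ is continuous.}
    \end{aligned}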