Publications

2021

L. Chiariglione, et al. (A. Artusi)

AI-based Media Coding and Beyond

MPAI – Moving Picture, Audio and Data Coding by Artificial Intelligence – is
the first standards body developing data coding standards that have
Artificial Intelligence (AI) as their core technology. MPAI believes that
universally accessible standards for AI-based data coding can have the same
positive effects on AI as standards have had on digital media.
The elementary components of MPAI standards – AI Modules (AIMs) – expose
standard interfaces for operation in a standard AI Framework (AIF). As
their performance may depend on the technologies used, MPAI expects that
competing developers providing AIMs will promote horizontal markets of AI
solutions that build on and further promote AI innovation.
Finally, the MPAI Framework Licenses provide guidelines to IPR holders,
facilitating the availability to standard users of compatible licenses.

2021

K. Kylili, A. Artusi and C. Hadjistassou

A new paradigm for estimating the prevalence of plastic litter in the marine environment

The intelligent method proposed herein is formulated on a deep learning technique which can identify, localise and map the shape of plastic debris in the marine environment. Utilising images depicting plastic litter from six beaches in Cyprus, the developed tool pointed to a plastic litter density of 0.035 items/m². Extrapolated to the entire shorelines of the island, the intelligent approach estimated about 66,000 plastic articles weighing a total of ≈1000 kg. Besides deducing the plastic litter density, the dimensions of all documented plastic litter were determined with the aid of the OpenCV Contours image processing tool. Results revealed that the dominant object length ranged between 10 and 30 cm, which is in agreement with the length of common plastic litter often spoiling these coastlines. In conclusion, only in-situ visual scan sample surveys, and no manual collection, were used to predict the density and the dimensions of the plastic litter.

@article{kylili2021new,
title={A new paradigm for estimating the prevalence of plastic litter in the marine environment},
author={Kylili, Kyriaki and Artusi, Alessandro and Hadjistassou, Constantinos},
journal={Marine Pollution Bulletin},
volume={173},
pages={113127},
year={2021},
publisher={Elsevier}
}

2021

S. Panayi and A. Artusi

Hazing or Dehazing: the big dilemma for object detection

One of the biggest adversaries to the computer vision pipeline is bad weather, which can deteriorate the visual quality of the captured images and degrade the performance of downstream tasks. Examples of such tasks include image classification, object detection and semantic segmentation. To ameliorate this acquisition bottleneck, vision experts have developed restoration approaches which aim to recover the visual information lost to poor climatic conditions such as atmospheric haze. The technique of single image dehazing has made great strides in producing restorations that are aesthetically pleasing to human perception. However, it is important to establish whether these approaches bring the same merits to high-level vision tasks. To this end, we formulate a study around the task of object detection that aims to uncover the underlying relationship between this high-level task and atmospheric haze, as well as examine the ability of the current dehazing process to enhance detection performance. From our experiments we find that while there is a clear negative relationship between hazy conditions and detection performance, the dehazing process does little to help achieve the desired haze-free results.

@inproceedings{panayi2021hazing,
author = {Panayi, Simoni and Artusi, Alessandro},
year = {2021},
month = {10},
title = {Hazing or Dehazing: the big dilemma for object detection},
booktitle = {2021 IEEE 23rd International Workshop on Multimedia Signal Processing (MMSP)}
}

2021

F. Banterle, A. Artusi, A. Moreo and F. Carrara

NoR-VDPNet++: Efficient Training and Architecture for Deep No-Reference Image Quality Metrics

Efficiency and efficacy are two desirable properties of the utmost importance for any evaluation metric dealing with Standard Dynamic Range (SDR) imaging or High Dynamic Range (HDR) imaging. However, these properties are hard to achieve simultaneously. On the one hand, metrics like HDR-VDP2.2 are known to mimic the human visual system (HVS) very accurately, but their high computational cost prevents their widespread use in large evaluation campaigns. On the other hand, computationally cheaper alternatives like PSNR or MSE fail to capture many of the crucial aspects of the HVS. In this work, we try to get the best of both worlds: we present NoR-VDPNet++, an improved variant of a previous deep learning-based metric for distilling HDR-VDP2.2 into a convolutional neural network (CNN).

@incollection{banterle2021nor,
title={NoR-VDPNet++: Efficient Training and Architecture for Deep No-Reference Image Quality Metrics},
author={Banterle, Francesco and Artusi, Alessandro and Moreo, Alejandro and Carrara, Fabio},
booktitle={ACM SIGGRAPH 2021 Talks},
pages={1--2},
year={2021}
}

2021

A. Artusi and K. A. Raftopoulos

A Framework for Objective Evaluation of Single Image De-hazing Techniques

Real-world environments, where images are acquired with digital cameras, may be subject to severe climatic conditions such as haze that can drastically reduce the performance of sophisticated computer vision algorithms used for various tasks, e.g., tracking, detection, classification, etc. Even though several single image de-hazing techniques have recently been proposed, with many deep-learning approaches among them, a general statistical framework that would permit an objective performance evaluation has not been independently introduced yet. In this manuscript, certain performance metrics that emphasize different aspects of image quality, output ranges and polarity are identified and combined into a single performance indicator derived in an unbiased manner. A general methodology is thus introduced, as a framework for objective performance evaluation of current and future dehazing tasks, through an extensive comparison of 15 single image de-hazing techniques over a vast range of image data sets. The proposed unified framework shows several advantages in evaluating diverse and perceptually meaningful image features, but also in elucidating future directions for improvement in image dehazing tasks.
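
The core idea of merging metrics with different output ranges and polarity into a single indicator can be sketched as follows. The metric names, the scores and the simple z-score scheme are illustrative assumptions for this example, not the paper's exact derivation.

```python
import numpy as np

# Hypothetical scores of 4 de-hazing methods under three metrics with
# different ranges and polarity: PSNR (higher is better), SSIM (higher
# is better, in [0, 1]) and CIEDE2000 colour error (lower is better).
scores = {
    "psnr":   (np.array([17.2, 21.5, 19.8, 23.1]), +1),
    "ssim":   (np.array([0.71, 0.83, 0.79, 0.88]), +1),
    "de2000": (np.array([14.0,  9.5, 11.2,  8.1]), -1),
}

def unified_indicator(scores):
    """Combine metrics with different output ranges and polarity into a
    single per-method indicator: z-score each metric across methods,
    align polarity so that higher is always better, then average."""
    cols = []
    for vals, polarity in scores.values():
        z = (vals - vals.mean()) / vals.std()
        cols.append(polarity * z)
    return np.mean(cols, axis=0)

print(unified_indicator(scores))  # the best method gets the highest value
```

Normalising each metric before combining them is what keeps the indicator unbiased with respect to the metrics' raw scales.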

@ARTICLE{9437207,
author={Artusi, Alessandro and Raftopoulos, Konstantinos A.},
journal={IEEE Access},
title={A Framework for Objective Evaluation of Single Image De-Hazing Techniques},
year={2021},
volume={9},
pages={76564-76575},
doi={10.1109/ACCESS.2021.3082207}}

2021

M. A. Hanif, F. Khalid, R. V. W. Putra, M. T. Teimoori, F. Kriebel, J. Zhang, K. Liu, S. Rehman, T. Theocharides, A. Artusi, S. Garg and M. Shafique

Robust Computing for Machine Learning-Based Systems

The drive for automation and constant monitoring has led to rapid development in the field of Machine Learning (ML). The high accuracy offered by state-of-the-art ML algorithms like Deep Neural Networks (DNNs) has paved the way for these algorithms to be used even in emerging safety-critical applications, e.g., autonomous driving and smart healthcare. However, these applications require assurance about the functionality of the underlying systems/algorithms. Therefore, the robustness of these ML algorithms to different reliability and security threats has to be thoroughly studied, and mechanisms/methodologies have to be designed that increase the inherent resilience of these ML algorithms. Since traditional reliability measures like spatial and temporal redundancy are costly, they may not be feasible for DNN-based ML systems, which are already highly compute- and memory-intensive. Hence, new robustness methods for ML systems are required. Towards this, in this chapter, we present our analyses illustrating the impact of different reliability and security vulnerabilities on the accuracy of DNNs. We also discuss techniques that can be employed to design ML algorithms such that they are inherently resilient to reliability and security threats. Towards the end, the chapter provides open research challenges and further research opportunities.

@incollection{hanif2021robust,
title={Robust Computing for Machine Learning-Based Systems},
author={Hanif, Muhammad Abdullah and Khalid, Faiq and Putra, Rachmad Vidya Wicaksana and Teimoori, Mohammad Taghi and Kriebel, Florian and Zhang, Jeff Jun and Liu, Kang and Rehman, Semeen and Theocharides, Theocharis and Artusi, Alessandro and others},
booktitle={Dependable Embedded Systems},
pages={479--503},
year={2021},
publisher={Springer, Cham}
}

2020

F. Banterle, A. Artusi, A. Moreo and F. Carrara

NoR-VDPNet: A No-Reference High-Dynamic-Range Quality Metric Trained on HDR-VDP 2

HDR-VDP 2 has convincingly been shown to be a reliable metric for image quality assessment, and it is currently playing a remarkable role in the evaluation of complex image processing algorithms. However, HDR-VDP 2 is known to be computationally expensive (both in terms of time and memory) and is constrained by the availability of a ground-truth image (the so-called reference) against which the quality of a processed image is quantified. These aspects impose severe limitations on the applicability of HDR-VDP 2 to real-world scenarios involving large quantities of data or requiring real-time responses. To address these issues, we propose Deep No-Reference Quality Metric (NoR-VDPNet), a deep-learning approach that learns to predict the global image quality feature (i.e., the mean-opinion-score index Q) that HDR-VDP 2 computes. NoR-VDPNet is no-reference (i.e., it operates without a ground truth reference) and its computational cost is substantially lower when compared to HDR-VDP 2 (by more than an order of magnitude). We demonstrate the performance of NoR-VDPNet in a variety of scenarios, including the optimization of parameters of a denoiser and JPEG-XT.
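
The distillation idea behind NoR-VDPNet (fit a cheap no-reference student to the scores of an expensive full-reference teacher) can be sketched in a toy, numpy-only form. Here plain MSE stands in for HDR-VDP 2 and a least-squares fit stands in for the CNN, so nothing below reproduces the actual model; it only illustrates the training scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Teacher": a full-reference metric (here simply MSE against the clean
# image, standing in for the far costlier HDR-VDP 2).
def teacher_metric(ref, dist):
    return np.mean((ref - dist) ** 2)

# No-reference feature: mean squared horizontal pixel difference, which
# for a smooth reference mainly reflects the noise energy.
def nr_feature(img):
    return np.mean((img[:, 1:] - img[:, :-1]) ** 2)

# Training set: smooth references plus Gaussian noise of random sigma.
X, y = [], []
for _ in range(200):
    ref = np.full((32, 32), rng.uniform(0.2, 0.8))
    dist = ref + rng.normal(0.0, rng.uniform(0.0, 0.2), ref.shape)
    X.append([nr_feature(dist), 1.0])   # feature plus bias term
    y.append(teacher_metric(ref, dist))
X, y = np.array(X), np.array(y)

# "Student": least-squares fit predicting the teacher's score from the
# distorted image alone, which is the distillation idea minus the CNN.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ w
corr = np.corrcoef(pred, y)[0, 1]
print(f"student vs. teacher correlation: {corr:.3f}")
```

Once trained, the student never needs the reference image, which is exactly what makes the distilled metric usable in no-reference, large-scale or real-time settings.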

@INPROCEEDINGS{9191202,
author={F. {Banterle} and A. {Artusi} and A. {Moreo} and F. {Carrara}},
booktitle={2020 IEEE International Conference on Image Processing (ICIP)},
title={{NoR-VDPNet}: A No-Reference High Dynamic Range Quality Metric Trained on {HDR-VDP} 2},
year={2020},
pages={126-130},
doi={10.1109/ICIP40778.2020.9191202}}

2020

K. Kylili, C. Hadjistassou and A. Artusi

An Intelligent Way for Discerning Plastics at the Shorelines and the Seas

Irrespective of how plastics litter the coastline or enter the sea, they pose a major threat to birds and marine life alike. In this study, an artificial intelligence tool was used to create an image classifier based on a convolutional neural network architecture that utilises the bottleneck method. The trained bottleneck method classifier was able to categorise plastics encountered either at the shoreline or floating at the sea surface into eight distinct classes, namely, plastic bags, bottles, buckets, food wrappings, straws, derelict nets, fish, and other objects. Discerning objects with a success rate of 90%, the proposed deep learning approach constitutes a leap towards the smart identification of plastics at the coastline and the sea. Training and testing loss and accuracy results for a range of epochs and batch sizes have lent credibility to the proposed method. Results originating from a resolution sensitivity analysis demonstrated that the prediction technique retains its ability to correctly identify plastics even when image resolution was downsized by 75%. Intelligent tools, such as the one suggested here, can replace manual sorting of macroplastics by human operators, revealing, for the first time, the true scale of the amount of plastic polluting our beaches and the seas.
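
A minimal sketch of the bottleneck-method structure described above, under loudly stated assumptions: a fixed random projection stands in for the pretrained convolutional base, and synthetic clusters stand in for the litter images, so this only illustrates the frozen-features-plus-trainable-head idea, not the authors' classifier.

```python
import numpy as np

rng = np.random.default_rng(1)

# Frozen "pretrained" base: a fixed random projection plus ReLU standing
# in for the convolutional layers that emit the bottleneck features.
W_frozen = rng.normal(size=(64, 8))
def bottleneck(x):                       # x: (n, 64) flattened inputs
    f = np.maximum(x @ W_frozen, 0.0)    # never updated during training
    return f / (np.linalg.norm(f, axis=1, keepdims=True) + 1e-9)

# Synthetic stand-in data: 3 of the 8 litter classes, one cluster each.
centres = rng.normal(scale=2.0, size=(3, 64))
X = np.vstack([c + rng.normal(scale=0.3, size=(50, 64)) for c in centres])
y = np.repeat(np.arange(3), 50)

# Trainable head: multinomial logistic regression on bottleneck features.
F = bottleneck(X)
W = np.zeros((F.shape[1], 3))
onehot = np.eye(3)[y]
for _ in range(300):                     # plain gradient descent
    z = F @ W
    p = np.exp(z - z.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W -= 0.5 * F.T @ (p - onehot) / len(y)

acc = np.mean((F @ W).argmax(axis=1) == y)
print(f"training accuracy: {acc:.2f}")
```

Training only the small head on frozen bottleneck features is what makes this transfer-learning scheme cheap enough to retrain for new litter categories.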

@article{kylili2020intelligent,
title={An intelligent way for discerning plastics at the shorelines and the seas},
author={Kylili, Kyriaki and Hadjistassou, Constantinos and Artusi, Alessandro},
journal={Environmental Science and Pollution Research},
volume={27},
number={34},
pages={42631--42643},
year={2020},
publisher={Springer}
}

2020

J. Happa and A. Artusi

Studying Illumination and Cultural Heritage

Computer graphics tools and techniques enable researchers to investigate cultural heritage and archaeological sites. They can facilitate documentation of real-world sites for further investigation, and enable archaeologists and historians to accurately study a past environment through simulations. This chapter explores how light plays a major role in examining computer-based representations of heritage. We discuss how light is both documented and modelled today using computer graphics techniques and tools. We also identify why both physical and historical accuracy in modelling light is becoming increasingly important for studying the past, and how emerging technologies such as High Dynamic Range (HDR) imaging and physically-based rendering are necessary to accurately represent heritage.

@incollection{happa2020studying,
title={Studying Illumination and Cultural Heritage},
author={Happa, Jassim and Artusi, Alessandro},
booktitle={Visual Computing for Cultural Heritage},
pages={23--42},
year={2020},
publisher={Springer}
}

2019

A. Artusi, F. Banterle, F. Carrara and A. Moreo

Efficient Evaluation of Image Quality via Deep-Learning Approximation of Perceptual Metrics

Image metrics based on Human Visual System (HVS) play a remarkable role in the evaluation of complex image processing algorithms.
However, mimicking the HVS is known to be complex and computationally expensive (both in terms of time and memory), and its usage is thus limited to a few applications and to small input data.
All of this makes such metrics not fully attractive in real-world scenarios. To address these issues, we propose Deep Image Quality Metric (DIQM), a deep-learning approach to learn the global image quality feature (mean-opinion-score). DIQM can emulate existing visual metrics efficiently, reducing the computational costs by more than an order of magnitude with respect to existing implementations.

@ARTICLE{8861304,
author={A. {Artusi} and F. {Banterle} and F. {Carrara} and A. {Moreo}},
journal={IEEE Transactions on Image Processing},
title={Efficient Evaluation of Image Quality via Deep-Learning Approximation of Perceptual Metrics},
year={2020},
volume={29},
pages={1843-1855},
doi={10.1109/TIP.2019.2944079},
ISSN={1941-0042},
month={9}
}
