Workshops

Workshop | Date | Time | Location
Workshop 2: Big Visual Data Analytics (BVDA) | Monday, October 28th | 14:30 - 18:00 | Capital Suite - 12 B
Workshop 3: SPVis: Security and Privacy of Machine Learning-based Vision Processing in Autonomous Systems | Monday, October 28th | 8:30 - 12:00 + 14:30 - 18:00 | Capital Suite - 10
Workshop 4: Embodied AI: Trends, Challenges, and Opportunities – EMAI | Monday, October 28th | 8:30 - 12:00 + 14:30 - 18:00 | Capital Suite - 12 A
Workshop 5: 2nd Workshop on 3D Computer Vision and Photogrammetry (3DCVP) | Monday, October 28th | 14:30 - 18:00 | Capital Suite - 20
Workshop 7: Biomedical Imaging & Diagnostics (BID) Workshop: Innovations in Biomarkers, Digital Pathology, & Radiology | Tuesday, October 29th | 8:30 - 12:00 + 14:30 - 18:00 | Capital Suite - 12 A
Workshop 8: Integrating Image Processing with Large-Scale Language/Vision Models for Advanced Visual Understanding | Tuesday, October 29th | 14:30 - 18:00 | Capital Suite - 12 B
Workshop 9: 1st Workshop on Intelligent Crowd Engineering (ICE) | Tuesday, October 29th | 14:30 - 18:00 | Capital Suite - 20
Workshop 10: Visual and Sensing AI for Smart Agriculture | Wednesday, October 30th | 14:30 - 18:00 | Capital Suite - 12 A
Workshop 11: AI4IPoT: AI for Image Processing Applications on Traffic: Advancements, Challenges, and Opportunities | Wednesday, October 30th | 14:30 - 18:00 | Capital Suite - 12 B
Workshop 12: Analysis of OCT Signals and Images: From Signal Formation to Practical Applications | Wednesday, October 30th | 8:30 - 12:00 + 14:30 - 18:00 | Capital Suite - 10

Workshop 2: Big Visual Data Analytics (BVDA)
Organizers: Ioannis Pitas (Aristotle University of Thessaloniki, Greece), Massimo Villari (University of Messina, Italy), Ioannis Mademlis (Harokopio University of Athens, Greece)
Session Chair: Ioannis Pitas (Aristotle University of Thessaloniki, Greece)
Date and Time: Monday, October 28th, 14:30-18:00
Location: Capital Suite – 12 B

Website: https://icarus.csd.auth.gr/cfp-bvda-icip24-workshop/

The ever-increasing availability of visual data leads to repositories and streams characterized by big data volume, velocity (acquisition and processing speed), variety (e.g., RGB, RGB-D, or hyperspectral images), and complexity (e.g., video data and point clouds). Such big visual data necessitate novel and advanced analysis methods in order to unlock their potential across diverse domains. The “Big Visual Data Analytics” (BVDA) workshop aims to explore this rapidly evolving field, which encompasses cutting-edge methods, emerging applications, and significant challenges in extracting meaning and value from large-scale visual datasets. From high-throughput biomedical imaging and autonomous driving sensors to satellite imagery and social media platforms, visual data have permeated nearly every aspect of our lives. Analyzing these data effectively requires efficient tools that go beyond traditional methods, leveraging advancements in machine learning, computer vision, and data science. Exciting new developments in these fields are already paving the way for fully and semi-automated visual data analysis workflows at an unprecedented scale. This workshop will provide a platform for researchers and practitioners to discuss recent breakthroughs and challenges in big visual data analytics, explore novel applications across diverse domains (e.g., environmental monitoring, natural disaster management, robotics, urban planning, healthcare), and foster interdisciplinary collaborations between computer vision, data science, machine learning, and domain experts. Its ultimate goal is to help identify promising research directions and pave the way for future innovations.

Time | Event
14:30 - 15:30 | Invited lecture by S. Battiato: “Generative AI and Impostor Bias: Innovations, Challenges, and Future Directions in Big Visual Data”
15:30 - 16:00 | Forest Fire Image Classification through Decentralized DNN Inference - Dimitrios Papaioannou, Vasileios Mygdalis, Ioannis Pitas
16:00 - 16:30 | Coffee Break
16:30 - 17:00 | Efficient Data Utilization in Deep Neural Networks for Inference Reliability - Ioanna Valsamara, Christos Papaioannidis, Ioannis Pitas

Workshop 3: SPVis: Security and Privacy of Machine Learning-based Vision Processing in Autonomous Systems
Organizers: Muhammad Shafique, Bassem Ouni, Michail Maniatakos, Ozgur Sinanoglu, Christina Pöpper (New York University, Abu Dhabi, UAE), Nasir Memon (New York University, Shanghai, China)
Date and Time: Monday, October 28th, 8:30-12:00 + 14:30-18:00
Location: Capital Suite – 10

Website: https://wp.nyu.edu/spvis_workshop_icip2024/

In an era of growing cyber-security threats and nano-scale devices, the intelligent camera-based features of smart cyber-physical systems (CPS, such as autonomous vehicles) and the Internet-of-Things (IoT) face new types of attacks and security/privacy threats on image/video data, requiring novel design principles for robust ML. Besides IP-stealing and data privacy attacks, the foremost threats to the robustness of modern ML systems operating on image/video data are adversarial and backdoor attacks. These attacks, characterized by deliberate and carefully crafted manipulations of images, exploit inherent vulnerabilities in machine/deep learning models and learning mechanisms, potentially leading to compromised performance and decision-making. Safeguarding against these security and privacy threats has become crucial, requiring continuous advancements in defense and obfuscation strategies to strengthen the resilience of intelligent systems in diverse image/video processing and computer vision applications. This workshop aims to bring together experts, researchers, and practitioners in image/vision processing and machine learning security/privacy to discuss the latest advancements, challenges, and solutions in the critical domains of adversarial machine learning, backdoors, DNN obfuscation, attacks on visual forensics, deepfake detectors for images/videos, etc.

Time | Event
8:30 - 9:15 | Opening Remarks - Muhammad Shafique: An Introduction to Security and Privacy in ML-Based Vision Processing for Autonomous Systems
9:15 - 10:00 | Keynote - Battista Biggio: Machine Learning Security: Are We There Yet?
10:00 - 10:30 | Coffee Break
10:30 - 11:00 | Keynote - Ernesto Damiani: Making ML-based Malware Detection Robust against Elusive Actions
11:00 - 12:00 | Regular Paper Session 1
11:00 - 11:20 | Prashant Kumar - SLACK: Attacking LiDAR-Based SLAM with Adversarial Point Injections
11:20 - 11:40 | Amira Guesmi - Exploring the Interplay of Interpretability and Robustness in Deep Neural Networks: A Saliency-Guided Approach
11:40 - 12:00 | Nandish Chattopadhyay - Investigating Spatially Correlated Patterns in Adversarial Images
12:00 - 14:30 | ICIP 2024 Plenary and Lunch Break
14:30 - 15:30 | Regular Paper Session 2
14:30 - 14:50 | Imanol Solano - SAFL: Sybil-Aware Federated Learning with Application to Face Recognition
14:50 - 15:10 | Andrea Ciamarra - Detecting Deepfakes Through Inconsistencies in Local Camera Surface Frames
15:10 - 15:30 | Walid El Maouaki - RobQuNNs: A Methodology for Robust Quanvolutional Neural Networks against Adversarial Attacks
15:30 - 16:00 | Panel Discussion
16:00 - 16:30 | Coffee Break
16:30 - 17:00 | Interactive Discussion with Workshop Participants
17:00 - 17:30 | Invited Talk (online) - Furong Huang: Crafting and Cracking AI in the Shadows of Language: Poison Data and Jailbreak Prompts for LLMs
17:30 - 18:00 | Invited Talk (online) - Farshad Khorrami: Attacks and Defenses for Deep Neural Networks with Applications to Autonomous Vehicles

Workshop 4: Embodied AI: Trends, Challenges, and Opportunities – EMAI
Organizers: Yi Fang, Hao Huang, Yu-Shen Liu, Tuka Waddah Alhanai, Shuaihang Yuan, Yu Hao (New York University Abu Dhabi, Tsinghua University)
Date and Time: Monday, October 28th, 8:30-12:00 + 14:30-18:00
Location: Capital Suite – 12 A

Website: https://emai-workshop.github.io/

The “Embodied AI: Exploring Trends, Challenges, and Opportunities” workshop at ICIP 2024 in Abu Dhabi, UAE, is an expansive forum dedicated to the intersection of Embodied AI and fields such as computer vision, language processing, graphics, and robotics. This workshop is designed to deepen the understanding of AI agents’ capabilities in perceiving, interacting, and reasoning within their environments, thereby fostering an interdisciplinary dialogue among leading researchers and practitioners. Attendees can expect a comprehensive agenda including insightful invited talks from eminent figures in the field, a poster session showcasing cutting-edge research, and engaging panel discussions aimed at debating the future directions of intelligent, interactive systems. This event promises to be a pivotal gathering for those keen to contribute to and shape the ongoing advancements in Embodied AI.

The dedicated workshop on Embodied AI is essential due to its unique focus on integrating physical embodiment with AI capabilities, addressing challenges and opportunities not fully explored in the main ICIP conference. It merges computer vision and robotics, pushing beyond traditional boundaries to create agents that perceive, interact, and reason within their environments. This specialized forum encourages cross-disciplinary collaboration, fostering advancements that are vital for the development of intelligent, interactive systems, and addressing the gap between current image processing techniques and the future needs of AI research, including foundation models, robotics, and embodied intelligence.

The EMAI 2024 workshop stands at the confluence of Embodied AI and pivotal areas such as computer vision, natural language processing, graphics, and robotics. This synthesis is poised to catalyze significant momentum in the field, by bringing the frontier of foundation models, robotics, and embodied AI to the research community.

Time | Event
8:30 - 9:00 | Opening Remarks - Yi Fang
9:00 - 9:30 | Multilingual Multimodal LLMs for Seamless Human-Robot Interaction - Hisham Cholakkal
9:30 - 10:00 | Robot Imagination: Affordance Reasoning via Physical Simulation - Gregory Chirikjian
10:00 - 10:30 | Coffee Break
10:30 - 11:00 | Scene Understanding for Safe and Autonomous Navigation - Amit K. Roy-Chowdhury
11:00 - 11:30 | Neuromorphic Computing Architectures for Artificial Perception - Jorge Dias
11:30 - 12:00 | Vision-Language Models and Robotics for Climate Action - Maryam Rahnemoonfar
12:00 - 14:30 | ICIP 2024 Plenary and Lunch Break
14:30 - 15:00 | Towards Efficient Vision-Language Navigation - Xiaojun Chang
15:00 - 15:30 | Data-Centric Approaches to Advancing Embodied AI - Zhiqiang Shen
15:30 - 16:00 | To Enable Multimedia Machines to Perceive and Act as Humans Do - Weisi Lin
16:00 - 16:30 | Coffee Break
16:30 - 17:00 | Large-scale Heterogeneous Scene Modelling and Editing - Dan Xu
17:00 - 17:30 | Visual Human Motion Analysis - Li Cheng
17:30 - 18:00 | Flexible Modality Learning: Modeling Arbitrary Modality Combination via the Mixture-of-Expert Framework - Tianlong Chen

Workshop 5: 2nd Workshop on 3D Computer Vision and Photogrammetry (3DCVP)
Organizers: Lazaros Grammatikopoulos, Elli Petsa, Giorgos Sfikas, Andreas el Saer (University of West Attica, Greece), George Retsinas (National Technical University of Athens, Greece), Christophoros Nikou, Panagiotis Dimitrakopoulos (University of Ioannina, Greece)
Date and Time: Monday, October 28th, 14:30-18:00
Location: Capital Suite – 20

Website: https://3dcvp.uniwa.gr/

Photogrammetry and Computer Vision are two major fields with a significant overlap with Image Processing. We plan to invite presentations of high-quality works on topics that will include novel developments over classic Photogrammetric Computer Vision problems such as Structure from motion and SLAM, as well as papers with a focus on novel learning techniques over 3D geometric data. Topics of interest will include feature extraction, description and matching, multispectral and hyperspectral image processing and fusion, Multi-View Reconstruction and Surface Reconstruction, 3D point cloud analysis and processing, scene understanding, robot vision and perception, path and motion planning.

Time | Event | Presenters
14:30 - 14:50 | An image to 3D cross-modal approach for real-time 3D Human Posture Estimation | Swapna Agarwal, Aniruddha Sinha, Avik Ghose (Tata Consultancy Services Limited)
14:50 - 15:10 | Exploitation of open source datasets and Deep Learning Models for the Detection of objects in Urban Areas | Elpida Gkouvra, Thodoris Betsas, Maria Pateraki (National Technical University of Athens)
15:10 - 15:30 | A Robust Skeleton lines extraction method for individual tree modeling using terrestrial LiDAR point clouds | Zhenyang Hui, Yating He, Yuanping Xia (East China University of Technology), Ting Hui (Guangdong AIB Polytechnic), Yuxin Xia (East China University of Technology)
15:30 - 15:50 | Invited paper: A Comparative Assessment of LiDAR-SLAM Approaches Using Terrestrial Laser Scanner Data | Manolis Taoulai, Maria Petsa, Andreas El Saer (University of West Attica), Christophoros Nikou (University of Ioannina), Lazaros Grammatikopoulos (University of West Attica)
16:00 - 16:30 | Coffee Break
16:30 - 16:50 | ISOGAN: A GAN-based method for isometric view images generation from three orthographic views contour drawings | Thao Nguyen Phuong, Hidetomo Sakaino, Vinh Nguyen Duy (FPT Consulting Japan)
16:50 - 17:10 | Underwater Archaeological Object Detection through bidirectional photogrammetric fusion | Ethan Zammit, Dylan Seychell, Carl James Debono, Timmy Gambin, John Wood (University of Malta)
17:10 - 17:30 | Metasplats: Rapid Sparse 2D view to 3D novel view synthesis | Joshna Manoj Reddy, Mukul Ingle (Metashop Private Limited)
17:30 - 17:45 | Closing remarks & Discussions

Workshop 7: Biomedical Imaging & Diagnostics (BID) Workshop: Innovations in Biomarkers, Digital Pathology, & Radiology
Organizers: Arash Mohammadi (Concordia University, Canada) and Ervin Sejdic (University of Toronto, Canada)
Date and Time: Tuesday, October 29th, 8:30-12:00 + 14:30-18:00
Location: Capital Suite – 12 A

Website: http://i-sip.encs.concordia.ca/bidicip2024/

Recent advances in various Artificial Intelligence (AI) approaches have spurred the growth of biomedical image processing applications. Algorithms are already deployed in hospital settings to aid clinicians with various medical image analysis tasks. The main motivation behind the IEEE BID workshop is to introduce recent image processing advances relevant to healthcare needs. The workshop's overarching objective is to bring engineers, computer scientists, and clinicians together to discuss the main issues in this field. More specifically, while the main conference will focus on advances in image processing, the workshop will solely consider advances relevant to clinical applications such as biomarker discovery, pathology, and radiology. The workshop is designed to initiate a broader conversation between theoreticians in the field and researchers focused on the practical applications of image processing in medicine and physiology. We aim to address the main issues associated with biomedical image processing, such as the lack of large, annotated data sets, the performance degradation of AI algorithms in real-world settings, and the need for reproducible research in the field. Lastly, the workshop aims to become a future meeting ground for researchers interested in the applications of image processing in various clinical settings. Topics of interest include, but are not limited to:

  • Biomedical Imaging Diagnostics and Prognostics
  • Image-Based Biomarker Discovery
  • Robust Image Processing Techniques with Limited Datasets
  • (Semi)Autonomous Labeling for Medical Imaging Data
  • Navigating the Variability in Medical Imaging
  • The Human-AI Collaboration
  • Personalized Medicine and Image Processing
  • AI-Driven Multi-Modal Fusion Frameworks
  • Precision Labeling in Medical Imaging
Time | Event
8:45 - 9:00 | Opening Remarks
9:00 - 10:00 | Keynote Speech (TBA)
10:00 - 10:30 | Coffee Break
10:30 - 11:15 | Transforming Tabular Data for Multi-modality: Enhancing Breast Cancer Metastasis Prediction - Faseela Abdullakutty, Younes Akbari, Ahmed Bouridane, and Rifat Hamoudi
11:15 - 12:00 | Democratizing MLLMs in Healthcare: TinyLLaVA-Med for Efficient Healthcare Diagnostics in Resource-constrained Settings - Aya El Mir, Lukelo Thadei Luoga, Boyuan Chen, Muhammad Abdullah Hanif, and Muhammad Shafique
12:00 - 14:30 | ICIP 2024 Plenary and Lunch Break
14:30 - 15:15 | An Automated Framework for Pneumonia Severity Scoring on Chest Radiographs: A Transfer Learning and Multi-task Learning Approach - Nastaran Enshaei and Farnoosh Naderkhani
15:15 - 16:00 | A Federated Learning Scheme for Neuro-Developmental Disorders: Multi-Aspect ASD Detection - Safa Otoum, Azzam Mourad, and Hala Shamseddine
16:00 - 16:30 | Coffee Break
16:30 - 17:15 | Q-Net: A Quantitative Susceptibility Mapping-based Deep Neural Network for Differential Diagnosis of Brain Iron Deposition in Hemochromatosis - Soheil Zabihi, Elahe Rahimian, Sadaf Khademi, Soumya Sharma, Sean K. Sethi, Sara Gharabaghi, Amir Asif, E. Mark Haacke, Mandar S. Jog, and Arash Mohammadi

Workshop 8: Integrating Image Processing with Large-Scale Language/Vision Models for Advanced Visual Understanding
Organizers: Yong Man Ro (KAIST, South Korea); Hak Gu Kim (Chung-Ang University, South Korea); Nikolaos Boulgouris (Brunel University London, UK)
Date and Time: Tuesday, October 29th, 14:30-18:00
Location: Capital Suite – 12 B

Website: https://carai.kaist.ac.kr/lvlm

This workshop aims to bridge the gap between conventional image processing techniques and the latest advancements in large-scale models (LLMs and LVLMs). In recent years, the integration of large-scale models into image processing tasks has shown significant promise in improving visual object understanding and image classification. This workshop will provide a platform for researchers and practitioners to explore the synergies between conventional image processing methods and cutting-edge large language models and large vision-language models, fostering innovation and collaboration in the field.

Objectives

  1. Explore the foundations of image processing techniques with large-scale models.
  2. Investigate the current landscape of large-scale language/vision models and their capabilities.
  3. Discuss challenges and opportunities in integrating large-scale models with image processing to enhance visual understanding.
  4. Showcase practical examples and case studies where the combined approach has yielded superior results.

This workshop is designed for researchers, academics, and industry professionals working in the fields of image processing, computer vision, multimedia processing and natural language processing. Participants should have a basic understanding of image processing concepts and an interest in exploring innovative approaches for visual understanding. The workshop will consist of paper presentations by leading experts in image processing and large-scale language/vision models. Participants will have the opportunity to engage in discussions, exchange insights, and collaborate on potential research projects.

Time | Event
14:30 - 14:40 | Opening
14:40 - 15:00 | Unveiling the Potential of Multimodal Large Language Models for Scene Text Segmentation via Semantic-Enhanced Features - Hyung Kyu Kim (Chung-Ang University)
15:00 - 15:20 | Improving T2I-Adapter via Integration of Visual and Textual Conditions with Attention Mechanism - Chen-Kuo Chiang (National Chung Cheng University)
15:20 - 15:40 | Retrieval-Augmented Natural Language Reasoning for Explainable Visual Question Answering - Hyeon Bae Kim (Kyung Hee University)
15:40 - 16:00 | Enhancing Daily Reports: Application of Multimodal GPT for Image Description Generation in Construction Site Quality Assurance - Chuan-Yu Chang (National Yunlin University of Science and Technology)
16:00 - 16:30 | Coffee Break
16:30 - 17:00 | Demo & Discussion
17:00 - 17:20 | LEAP:D - A Novel Prompt-based Approach for Domain-Generalized Aerial Object Detection - Chanyeong Park (Chung-Ang University)
17:20 - 17:40 | Disturbing Image Detection Using LMM-Elicited Emotion Embeddings - Vasileios Mezaris (CERTH)
17:40 - 18:00 | Revisiting Misalignment in Multispectral Pedestrian Detection: A Language-Driven Approach for Cross-modal Alignment Fusion - Youngjoon Yu (KAIST)

Workshop 9: 1st Workshop on Intelligent Crowd Engineering (ICE)
Organizers: Baek-Young Choi (University of Missouri – Kansas City, USA), Khalid Almalki (Saudi Electronic University, Saudi Arabia), Muhammad Mohzary (Jazan University, Jazan, Saudi Arabia), Sejun Song (Augusta University, GA, USA)
Date and Time: Tuesday, October 29th, 14:30-18:00
Location: Capital Suite – 20

Website: https://sites.google.com/view/ice-workshop/

Crowd events such as festivals, concerts, shopping, sports, political events (e.g., protests), and religious events (e.g., the Hajj or Kumbh Mela) are a significant part of modern human society and can occur anywhere and at any time. Unfortunately, human casualties caused by chaotic stampedes at crowd events, as well as the transmission of infectious diseases, exemplified by the COVID-19 pandemic, reveal pervasive deficiencies and call for effective crowd safety control and management mechanisms.

Machine Learning (ML) methodologies have been applied to crowd counting and density estimation, drawing inspiration from advancements in computer vision and video surveillance. These technological interventions are designed to mitigate the risk of personal injuries and fatalities amid densely populated gatherings, including political rallies, entertainment events, and religious congregations. Despite these advancements, contemporary crowd safety management frameworks still fall short in precision, scalability, and the capability to perform nuanced crowd characterization in real time. Remaining challenges include detailed group dynamics analysis, assessment of the impact of occlusions, and the execution of adequate mobility, contact tracing, and social distancing strategies.

The inaugural Intelligent Crowd Engineering (ICE) workshop aims to bring together eminent scientists, researchers, and engineers to present and discuss novel crowd safety challenges, broach cutting-edge topics, and unveil emerging technologies that transcend conventional crowd-counting methodologies. ICE draws on a diverse technological spectrum, including ML, Artificial Intelligence (AI), and the Internet of Things (IoT), alongside social modeling and integration frameworks, to substantially improve the precision, scalability, and real-time operational efficacy of crowd safety management systems.

This workshop plans to integrate comprehensive approaches encompassing pivotal aspects of crowd engineering including but not limited to:

  • Trustworthy visual data processing and knowledge processing
  • IoT-enabled mobility characterization
  • ML-augmented video surveillance
  • Semantic information-driven application support
Time | Event
14:30 - 15:20 | Opening Remarks and Keynote
    Keynote: Intelligent Crowd Engineering: Challenges and Opportunities - Baek-Young Choi, Ph.D., University of Missouri – Kansas City, USA
15:20 - 16:00 | Invited Talks Session 1
    Diffusion Models in Gait Recognition - Zhu Li, Ph.D., University of Missouri – Kansas City, USA
    A Novel Approach for Predicting Mobility in Crowd Scenes - Sejun Song, Ph.D., Augusta University, USA
16:00 - 16:30 | Coffee Break
16:30 - 17:10 | Invited Talks Session 2
    AI-Powered Smart Solutions for Crowd Management: The Future of Intelligent Crowd Engineering - Khalid Almalki, Ph.D., Saudi Electronic University, Saudi Arabia
    Leveraging Large Language Models for Intelligent Crowd Engineering: Enhancing Real-time Decision-Making and Behavioral Insights - Muhammad Mohzary, Ph.D., Jazan University, Saudi Arabia
17:10 - 18:00 | Panel Discussion: Future of Crowd Safety: Integrating AI, IoT, and Social Modeling for Predictive and Preventive Public Safety

Addressed topics include:

  • How advanced AI and IoT technologies can predict crowd-related disasters and prevent incidents like stampedes.
  • The use of social modeling and real-time data for better crowd behavior forecasting and management.
  • Challenges in deploying scalable and accurate real-time systems for large events.
  • Ethical and privacy concerns in using AI for crowd surveillance and disease tracking.

Workshop 10: Visual and Sensing AI for Smart Agriculture
Organizers: Yuxing Han, Fengqing Maggie Zhu, Yubin Lan
Date and Time: Wednesday, October 30th, 14:30-18:00
Location: Capital Suite – 12 A

Website: https://drokish.com/2024/04/14/ICIP-2024-Workshop-on-Visual-and-Sensing-AI-for-Smart-Agriculture/

In recent years, sensor networks, drones, and IoT technologies have been introduced to improve many aspects of agricultural practice, including but not limited to energy efficiency, environmental friendliness, and food healthiness. These applications necessitate advanced signal processing techniques and systems that meet the many challenges of agriculture under strict cost, power consumption, weather-proofing, and other constraints. Furthermore, with revolutionary advancements in deep learning and AI, smart agriculture is on the eve of explosive growth, with its impact felt on a global scale. At this juncture, this dedicated workshop brings together leading experts in smart agriculture and in signal processing for smart agriculture, as well as a broad audience from the signal processing community in general, for in-depth, face-to-face presentation and discussion of the signal processing challenges in smart agriculture applications and of the state of the art, thereby raising the image processing and signal processing community's awareness of critical and promising research areas and challenges pertaining to one of the most fundamental and ancient practices of human society. The workshop will include presentations by experts from the drone, smart agriculture, and signal processing communities, paper presentations, and a panel for curated open discussions.

Time | Event
14:30 - 14:40 | Opening
14:40 - 16:00 | Session 1: Keynote speeches
14:40 - 15:00 | Precision Agriculture Aviation in the past 10 years - Prof. Yubin Lan
15:00 - 15:20 | Dietary Assessment in the Era of Artificial Intelligence - Prof. Maggie Zhu
15:20 - 15:40 | An efficient transformer network for detecting multi-scale chicken using hyperspectral imaging systems - Prof. Yuxing Han
15:40 - 16:00 | A lightweight and efficient visual detection model for nutrient buds in complex tea plantation environments - Prof. Junshu Wang
16:00 - 16:30 | Coffee Break
16:30 - 17:50 | Session 2: Oral Presentations
16:30 - 16:50 | Perceiving Aroma and Taste: Intelligent sensors for Volatile Compounds from Agricultural Products - Prof. Daming Dong
16:50 - 17:10 | AI-driven unmanned grassland inspection and investigation - Prof. Tianyi Wang
17:10 - 17:30 | Application of Machine Learning and Piezoelectric Effect in Agricultural Mechanization Harvesting Equipment - Dr. Yibo Li
17:30 - 17:50 | Privacy inference and data forgetting federated learning for smart agriculture - Dr. Gongxi Zhu
17:50 - 18:00 | Summary and Closing

Workshop 11: AI4IPoT: AI for Image Processing Applications on Traffic: Advancements, Challenges, and Opportunities
Organizers: Xian Zhong, Wenxin Huang, Zheng Wang, Yang Ruan, Chia-Wen Lin, Alex Kot
Date and Time: Wednesday, October 30th, 14:30-18:00
Location: Capital Suite – 12 B

Website: https://ai4ipot.github.io/

This workshop is designed as a leading forum to explore complex challenges and opportunities in intelligent traffic systems. It covers diverse topics, including safety under adverse weather conditions, advanced scene reconstruction, visual perception for autonomous driving, multimodal sensor fusion, and behavioral analysis of traffic violations. The session aims to bring together pioneers from image processing and artificial intelligence to explore large models within intelligent transportation and autonomous driving systems. This forum encourages rich discussions on AI’s role in surveillance and driver assistance, focusing on both large models and green computing strategies. The workshop facilitates dialogue between academics and industry experts to promote new ideas and explore new directions in autonomous driving and traffic management. Topics include, but are not limited to:

  1. Scene Reconstruction and Visual Perception

Advanced techniques for scene reconstruction and visual perception to improve accuracy in intelligent transportation and autonomous driving systems.

  2. Multimodal Sensor Fusion

Strategies for integrating data from multimodal sensors to enhance environmental analysis and image quality in traffic systems.

  3. Transportation Safety under Adverse Weather Conditions

Investigation of technologies that ensure vehicle safety under adverse weather conditions.

  4. AI-Driven Traffic Management Systems

Use of large models to optimize traffic flow and enhance functionalities in autonomous driving systems.

  5. Sustainable and Lightweight Traffic Technologies

Development of sustainable traffic systems using green computing and lightweight processing to reduce energy consumption.

  6. Behavioral Analysis of Driving and Traffic Violations

Analytical technologies for studying driving behaviors and detecting traffic violations.

Time | Event
14:30 - 14:35 | Opening Remarks and Introduction
14:35 - 15:00 | Keynote 1: Review of Low-Quality Image Processing Research - Prof. Xian Zhong
15:00 - 16:00 | Oral Presentations Session 1
15:00 - 15:20 | City Traffic Aware Multi-Target Tracking Prediction with Multi-Camera - Kanglei Peng, Tuo Dong, Wanqin Zhang, Jianhui Zhang
15:20 - 15:40 | Dynamic Task-oriented Prompting for Generalized Aerial Category Discovery - Muli Yang, Huaiyuan Qin, Hongyuan Zhu
15:40 - 16:00 | Light in the Dark: Cooperating Prior Knowledge and Prompt Learning for Low-Light Action Recognition - Qi Liu, Shu Ye
16:00 - 16:30 | Coffee Break
16:30 - 17:10 | Oral Presentations Session 2
16:30 - 16:50 | Robust ADAS: Enhancing Robustness of Machine Learning-Based Advanced Driver Assistance Systems for Adverse Weather - Muhammad Zaeem Shahzad, Muhammad Abdullah Hanif, Muhammad Shafique
16:50 - 17:10 | Scenario-Adaptive Object Detection with Dual-branch Feature Pyramid Networks - Tuo Dong, Yining Gu, Zhaoxu Zheng, Jianhui Zhang
17:10 - 17:50 | Keynote 2: Visual Computing in Varying Resolution, Illumination, and Resource Conditions - Weisi Lin
17:50 - 18:00 | Award Session and Closing Remarks

Workshop 12: Analysis of OCT Signals and Images: From Signal Formation to Practical Applications
Organizers: Taimur Hassan (Abu Dhabi University, United Arab Emirates), Azhar Zam (New York University, Abu Dhabi), Naoufel Werghi (Khalifa University, UAE), Lev Matveev (OpticElastograph LLC), Alex Vitkin (University of Toronto, Canada)
Date and Time: Wednesday, October 30th, 8:30-12:00 + 14:30-18:00
Location: Capital Suite – 10

Website: https://taimurhassan.github.io/octworkshop/

Optical Coherence Tomography (OCT) is a well-established technique for retinal diagnostics, now expanding into non-ophthalmological fields such as dermatology, oncology, mucosal tissue diagnostics, and more. OCT boasts mesoscopic resolution, implying that the resolution volume encompasses only a few cells. It also offers a deeper penetration depth (approximately a few millimeters) than microscopy, effectively bridging the gap between ultrasound and microscopy. OCT signal features are highly sensitive to the properties of sub-resolved optical scatterers. For example, speckle pattern parameters correlate with scatterer concentrations and scatterer clustering, while speckle variance is sensitive to scatterer motion, such as blood flow. Moreover, OCT can be made sensitive to optical phase and polarization. These capabilities are further enhanced by AI. By combining all these technology development components into a seamless pipeline, one can achieve high-performance diagnostics and perform optical biopsy and virtual histology. Leveraging these advancements, numerous OCT applications have emerged over the recent decade, most of them based on specific OCT image and signal processing. This workshop comprises four consecutive panels and is dedicated to the whole pipeline, from OCT technology development and signal formation, through signal features and preprocessing, to AI enablement and practical implementations and applications.

Time | Event
8:30 - 9:15 | Usman Akram, National University of Sciences and Technology: “Leveraging the Power of Generative AI and Transformers for the Analysis of Unstained Tissue Samples” (AI track) [live cast to the auditorium from Pakistan; fully in-person sessions start afterwards]
9:15 - 10:00 | Taimur Hassan, Abu Dhabi University: “Application of OCT for automated retinopathy analysis” (AI track)
10:00 - 10:30 | Coffee Break
10:30 - 11:35 | Keynote talk - Brendan Kennedy, University of Western Australia: “Signal processing in compression optical coherence elastography: From phase maps to stiffness images”
11:35 - 12:15 | Lev Matveev, Russian Academy of Sciences: “Digital phantoms for optical coherence tomography: from signal features evaluation to creation of novel signal processing tools”
12:15 - 14:30 | ICIP 2024 Plenary and Lunch Break
14:30 - 15:15 | Alex Vitkin, University of Toronto: “OCT for functional imaging: practical implementations and applications”
15:15 - 16:00 | Azhar Zam, New York University Abu Dhabi (NYUAD): “OCT-based feedback system for smart laserosteotome”
16:00 - 16:40 | Coffee Break, Discussion and Networking Session
16:40 - 18:00 | Inclusive Global Online Session with talks:
16:40 - 17:05 | Konstantin Yashin, Privolzhsky Research Medical University Clinic: “OCT for brain tissue structural imaging and surgery navigation”
17:05 - 18:00 | MathWorks open online tutorial: “MATLAB Medical Imaging Toolbox Model for Medical Segment Anything (MedSAM)” (AI track)