Award

Thank you for attending HASCA2024!
This year's HASCA has successfully concluded.

We had great presentations and papers.
Among them, based on participants' votes, we present the following awards:

  • Best Paper
  • Best Presentation

This year, both the Best Paper and Best Presentation awards go to:
PrISM: Unified Framework for Task Assistants powered by Multimodal Human Activity Recognition
Riku Arakawa (Carnegie Mellon University), Mayank Goel (Carnegie Mellon University)



Call for Contributions

We are pleased to announce that the HASCA (Human Activity Sensing Corpus and Applications) Workshop will take place as part of UbiComp 2024.
HASCA is one of the largest workshops at UbiComp and has been held for over 12 years.

Dates

Submission Deadline: Jun. 14, 2024 (extended from Jun. 7, 2024)
Acceptance Notification: Jul. 5, 2024
Camera-ready: Jul. 19, 2024
Workshop: Oct. 5, 2024

SUMMARY

The objective of this workshop is to share experiences among
researchers regarding the current challenges of real-world activity
recognition, newly developed datasets and tools, and breakthrough
approaches towards open-ended contextual intelligence.

This workshop discusses the challenges of designing reproducible
experimental setups, running large-scale dataset collection campaigns,
designing activity and context recognition methods that are robust and
adaptive, and evaluating systems in the real world.

As a special topic this year, we will reflect on the challenges of
recognizing situations, events and/or activities beyond the statically
predefined pools that represent the current state of the art, and
instead adopt an "open-ended view" on activity and context awareness.
This may combine the automatic discovery of relevant patterns in
sensor data, experience sampling and wearable technologies to
unobtrusively discover the semantic meaning of such patterns,
crowd-sourcing of dataset acquisition and annotation, and new
"open-ended" human activity modeling techniques.

CALL FOR CONTRIBUTIONS

- *Data collection*, *Corpus construction*.
Experiences or reports from data collection and/or corpus construction
projects, including papers that describe the formats, styles, and/or
methodologies for data collection. Crowd-sourced data collection and
participatory sensing could also be included in this topic.

- *Effectiveness of Data*, *Data Centric Research*.
There is a field of research based on collected corpora, so-called
"data-centric research". We also call for experience reports on using
large-scale human activity sensing corpora. By analyzing large-scale
corpora with machine learning, there is a large space for improving
the performance of recognition results.

- *Tools and Algorithms for Activity Recognition*.
If we had appropriate tools for the management of sensor data,
activity recognition researchers could focus more on their actual
research themes. However, developed tools and algorithms are often not
shared within the research community. In this workshop, we solicit
reports on developed tools and algorithms to be shared with the
community.

- *Real World Application and Experiences*.
Activity recognition "in the lab" usually works well. However, it does
not scale well to real-world data. In this workshop, we also solicit
experiences from real-world applications. There is a huge gap between
"lab" and "real world" environments. Large-scale human activity
sensing corpora will help to overcome this gap.

- *Sensing Devices and Systems*.
Data collection is performed not only with "off-the-shelf" sensors but
also with newly developed sensors that supply information that has not
been investigated before. There is also a research area concerning the
development of new platforms for data collection and evaluation tools
for collected data.

In light of this year's special emphasis on open-ended contextual
awareness, we wish to cover the following topics as well:

- *Mobile Experience Sampling*, *Experience Sampling Strategies*.
Advances in experience sampling approaches, for instance intelligent
user queries or those using novel devices (e.g. smartwatches), are
likely to play an important role in providing user-contributed
annotations of their own activities.

- *Unsupervised Pattern Discovery*.
Discovering meaningful patterns in sensor data in an unsupervised
manner may be needed to inform other elements of the system, for
example by inquiring the user or by triggering crowd-sourced
annotation.
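As a rough illustration of this idea (a toy sketch with synthetic data and a hand-rolled k-means, not any specific HASCA tool or dataset), repeating structure can be surfaced without labels by clustering summary features of sliding windows over a sensor stream:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic one-axis accelerometer stream: a quiet segment followed by a
# vigorous, periodic one (e.g. walking), with sensor noise on both.
quiet = rng.normal(0.0, 0.05, 500)
active = np.sin(np.linspace(0, 60 * np.pi, 500)) + rng.normal(0.0, 0.2, 500)
signal = np.concatenate([quiet, active])

# Slide a fixed-length window over the stream and summarize each window
# with simple statistics (mean and standard deviation).
win, step = 50, 25
windows = np.array([signal[i:i + win]
                    for i in range(0, len(signal) - win + 1, step)])
feats = np.column_stack([windows.mean(axis=1), windows.std(axis=1)])

# Minimal two-cluster k-means (Lloyd's algorithm) on the window features;
# no labels are used anywhere.
centers = feats[[0, -1]].copy()  # seed with one window from each end
for _ in range(20):
    dists = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    labels = dists.argmin(axis=1)
    centers = np.array([feats[labels == k].mean(axis=0) for k in (0, 1)])

# Windows from the quiet half land in one cluster and windows from the
# active half in the other; the discovered groups could then be shown
# to the user (or a crowd) for semantic annotation.
print(labels[:5], labels[-5:])
```

The clusters carry no semantic meaning by themselves; the point is that they identify candidate segments worth annotating, which is exactly where experience sampling or crowd-sourcing can enter.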

- *Dataset Acquisition and Annotation*, *Crowd-Sourcing*, *Web-Mining*.
A wide abundance of sensor data is potentially within reach of users
instrumented with mobile phones and other wearables. Capitalizing on
crowd-sourcing to create larger datasets in a cost-effective manner
may be critical to open-ended activity recognition. Many online
datasets are also available and could be used to bootstrap recognition
models.

- *Transfer Learning*, *Semi-Supervised Learning*, *Lifelong Learning*.
The ability to translate recognition models across modalities, or to
use minimal forms of supervision, would allow datasets to be reused in
a wider range of domains and reduce the costs of acquiring annotations.
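A minimal numerical sketch of the reuse idea (entirely illustrative: the "pretrained extractor" below is just a frozen random projection standing in for a network trained on a large source-domain corpus, and the data is synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a pretrained feature extractor: a frozen projection.
# In a real pipeline this would be a network trained on a large
# source-domain activity corpus; here it is random for illustration.
W_frozen = rng.normal(size=(20, 8))

def extract(x):
    return np.tanh(x @ W_frozen)  # frozen: never updated below

# A small target-domain task with only a handful of labelled samples
# per class (two synthetic classes separated in input space).
n = 40
x0 = rng.normal(loc=-1.0, size=(n, 20))
x1 = rng.normal(loc=+1.0, size=(n, 20))
X = extract(np.vstack([x0, x1]))
y = np.array([0] * n + [1] * n)

# Train only a logistic-regression "head" on top of the frozen
# features, i.e. minimal supervision reusing the source representation.
w, b = np.zeros(8), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
    w -= 0.5 * X.T @ (p - y) / len(y)        # gradient step on weights
    b -= 0.5 * (p - y).mean()                # gradient step on bias

acc = float((((X @ w + b) > 0).astype(int) == y).mean())
print(f"target-domain training accuracy with a frozen extractor: {acc:.2f}")
```

Only the small head is fit to the new domain, so annotation cost scales with the head rather than the full model; real transfer-learning pipelines apply the same pattern with actual pretrained networks.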

- *Deep Learning*.
Following the great success of deep learning in other AI domains, deep
learning models are gradually playing an important role in activity
recognition as well.

AREAS OF INTEREST

  • Human Activity Sensing Corpus
  • Large Scale Data Collection
  • Data Validation
  • Data Tagging / Labeling
  • Efficient Data Collection
  • Data Mining from Corpus
  • Automatic Segmentation
  • Performance Evaluation
  • Man-machine Interaction
  • Noise Robustness
  • Unsupervised Machine Learning
  • Sensor Data Fusion
  • Tools for Human Activity Corpus/Sensing
  • Participatory Sensing
  • Feature Extraction and Selection
  • Context Awareness
  • Pedestrian Navigation
  • Social Activities Analysis/Detection
  • Compressive Sensing
  • Sensing Devices
  • Lifelog Systems
  • Route Recognition/Detection
  • Wearable Application
  • Gait Analysis
  • Health-care Monitoring/Recommendation
  • Daily-life Worker Support
  • Deep Learning

FORMAT & TEMPLATE

Papers must be 6 pages, including references, in the two-column format.

ACM requires UbiComp/ISWC 2024 workshop submissions to use the double-column template. Please note that the template for submission is double-column format and the template for publication (camera-ready) is in single-column.
Please carefully read the UbiComp website for details about the template.

Submissions do not need to be anonymous.
All submissions will be peer-reviewed, taking into account their contribution to the topic of the workshop.
The accepted papers will be published in the UbiComp/ISWC 2024 adjunct proceedings, which will be included in the ACM Digital Library.

SUBMISSION

Please submit your papers via https://new.precisionconference.com/submissions
Make a new submission as follows:

  1. Society: SIGCHI
  2. Conference/Journal: UbiComp/ISWC 2024
  3. Track: UbiComp/ISWC 2024 12th Workshop on HASCA
  4. Click the "Go" button

IMPORTANT DATES

Full research/short technical papers:

  • Submission Deadline: Jun. 14, 2024 (extended from Jun. 7, 2024)
  • Acceptance Notification: Jul. 5, 2024
  • Camera-ready: Jul. 19, 2024
  • Workshop: Oct. 5, 2024

SPECIAL SESSION

This year, the following challenges are held in conjunction with HASCA.

Sussex-Huawei Locomotion Challenge 2024
http://www.shl-dataset.org/activity-recognition-challenge-2024/

WEAR Dataset Challenge
https://mariusbock.github.io/wear/challenge.html

CONTACT
hasca-organizer[at]ml.hasc.jp


Program

We apologize for the delay in publishing this program.

HASCA Workshop will take place on Saturday, 5th Oct. at Victoria Suite (Room 2).

Presentation time:
HASCA oral presentation - 15 min (10-min talk + 5-min Q&A)
Other presentations - follow the timetable

08:00-09:00 Registration
09:00-10:30 Session 1: HASCA session[90min] (Chair: Tsuyoshi Okita)
  • Large Language Models for Generating Semantic Nursing Activity Logs: Exploiting Temporal and Contextual Information
    Nazmun Nahid (Kyushu Institute of Technology), Ryuya Munemoto (Kyushu Institute of Technology), Sozo Inoue (Kyushu Institute of Technology)
  • Synthetic Skeleton Data Generation using Large Language Model for Nurse Activity Recognition
    Umang Dobhal (Dronacharya College of Engineering), Christina Alvarez Garcia (Kyushu Institute of Technology), Sozo Inoue (Kyushu Institute of Technology)
  • Initial Investigation of Kolmogorov-Arnold Networks (KANs) as Feature Extractors for IMU Based Human Activity Recognition
    Mengxi Liu (German Research Center for Artificial Intelligence), Daniel Geißler (German Research Center for Artificial Intelligence), Dominique Nshimyimana (DFKI), Sizhen Bian (ETH Zürich), Bo Zhou (German Research Center for Artificial Intelligence), Paul Lukowicz (DFKI)
  • PrISM: Unified Framework for Task Assistants powered by Multimodal Human Activity Recognition
    Riku Arakawa (Carnegie Mellon University), Mayank Goel (Carnegie Mellon University)
  • DNN Model Comparison for Sensor Location Robustness
    Yu Enokibori (Nagoya University), Takahiro Saito (Nagoya University), Kenji Mase (Nagoya University)
  • Emotion Recognition on the Go: Utilizing Wearable IMUs for Personalized Emotion Recognition
    Zikang Leng (Georgia Institute of Technology), Myeongul Jung (Hanyang University), Sungjin Hwang (Hanyang University), Seungwoo Oh (Hanyang University), Lizhe Zhang (Georgia Institute of Technology), Thomas Ploetz (Georgia Institute of Technology), Kwanguk Kim (Hanyang University)
10:30-11:00 Coffee Break
11:00-12:30 Session 2: HASCA session[30min] + WEAR session[60min] (Chair: Marius Bock)
HASCA
  • Diffusion Model-based Classifier for Human Activity Recognition
    Kosuke Ukita (Kyushu Institute of Technology), Tsuyoshi Okita (Kyushu Institute of Technology)
  • Game of LLMs: Discovering Structural Constructs in Activities using Large Language Models
    Shruthi Kashinath Hiremath (Georgia Institute of Technology), Thomas Ploetz (Georgia Institute of Technology)
WEAR
  • Introduction and Challenge Overview
    Marius Bock
  • TA-DA! - Improving Activity Recognition using Temporal Adapters and Data Augmentation
    Maximilian Hopp (University of Siegen), Helge Hartleb (University of Siegen), Robin Burchard (University of Siegen)
  • Left-Right Swapping and Upper-Lower Limb Pairing for Robust Multi-Wearable Workout Activity Detection
    Jonas Van Der Donckt (Ghent University), Jeroen Van Der Donckt (Ghent University - imec), Sofie Van Hoecke (Ghent University - imec)
  • Augmentation Approaches to Refine Wearable Human Activity Recognition
    Somesh Salunkhe (University of Siegen), Shubham Pradeep Shinde (University of Siegen), Pradnyesh Patil (University of Siegen), Robin Burchard (University of Siegen)
  • Results & Winners Ceremony
12:30-14:00 Lunch Break
14:00-15:30 Session 3: SHL session[90min] (Chair: Mathias Ciliberto, Kazuya Murao, Lin Wang)
  • Summary talk [15 min]
  • Paper 1 [12 min]
  • Paper 2 [12 min]
  • Paper 3 [12 min]
  • Ceremony [5 min]
  • Poster [10 min]
15:30-16:00 Coffee Break with SHL poster (cont'd)
16:00-17:30 Session 4: HASCA session[90min] (Chair: Yu Enokibori)
  • Water Level Recognition by Analyzing the Sound when Pouring Water
    Atsuhiro Fujii (Ritsumeikan University), Kazuya Murao (Ritsumeikan University)
  • A System to Visualize Differences in Paddling Timing between Teammates in Rowing
    Daiki Takahashi (Ritsumeikan University), Kazuya Murao (Ritsumeikan University)
  • A Monocular Fisheye Video-Based 2D to 3D Pose Lift Technique with Multiperson Spatial Context Integration
    Iqbal Hassan (Kyushu Institute of Technology), Nazmun Nahid (Kyushu Institute of Technology), Sozo Inoue (Kyushu Institute of Technology)
  • Composite Image Generation Using Labeled Segments for Pattern-Rich Dataset without Unannotated Target
    Kazuma Kano (Nagoya University), Yuki Mori (Nagoya University), Keisuke Higashiura (Nagoya University), Tahera Hossain (Nagoya University), Shin Katayama (Nagoya University), Kenta Urano (Nagoya University), Takuro Yonezawa (Nagoya University), Nobuo Kawaguchi (Nagoya University)
  • User Authentication Method for Smart Glasses using Gaze Information of Registered Known Images and AI-generated Unknown Images
    Masaya Inoue (Ritsumeikan University), Kazuya Murao (Ritsumeikan University)
  • Face Recognition Reinforcement using Pulse Waves of Front Camera Face Image and Rear Camera Finger Image on a Smartphone
    Taiki Yuma (Ritsumeikan University), Kazuya Murao (Ritsumeikan University)
17:30- Closing

Welcome to HASCA2024

Welcome to HASCA2024 Web site!

HASCA2024 is the 12th International Workshop on Human Activity Sensing Corpus and Applications. The workshop will be held in conjunction with UbiComp/ISWC 2024.

Important Dates
Submission Deadline: Jun. 14 (extended from Jun. 7)
Acceptance Notification: Jul. 5
Camera-ready: Jul. 19
Workshop: Oct. 5, 2024, Melbourne, Australia

Challenges

The following challenges are held in conjunction with HASCA 2024.
Please refer to each challenge website for details including rules and deadlines.

Sussex-Huawei Locomotion Challenge 2024
http://www.shl-dataset.org/activity-recognition-challenge-2024/

WEAR Dataset Challenge
https://mariusbock.github.io/wear/challenge.html

Abstract

The recognition of complex and subtle human behaviors from wearable sensors will enable next-generation human-oriented computing in scenarios of high societal value (e.g., dementia care). This will require large-scale human activity corpora and improved methods to recognize activities and the context in which they occur. This workshop deals with the challenges of designing reproducible experimental setups, running large-scale dataset collection campaigns, designing activity and context recognition methods that are robust and adaptive, and evaluating systems in the real world. We wish to reflect on future methods, such as lifelong learning approaches that allow open-ended activity recognition. The objective of this workshop is to share the experiences among current researchers around the challenges of real-world activity recognition, the role of datasets and tools, and breakthrough approaches towards open-ended contextual intelligence.

We expect the following domains to be relevant contributions to this workshop (but not limited to):

Data collection / Corpus construction

Experiences or reports from data collection and/or corpus construction projects, such as papers describing the formats, styles, or methodologies for data collection. Crowd-sourced data collection or participatory sensing could also be included in this topic.

Effectiveness of Data / Data Centric Research

There is a field of research based on collected corpora, so-called "Data-Centric Research". We also solicit experience reports on using large-scale human activity sensing corpora. By applying machine learning to large-scale corpora, there is a large space for improving the performance of recognition results.

Tools and Algorithms for Activity Recognition

If we had appropriate and suitable tools for the management of sensor data, activity recognition researchers could focus more on their research themes. However, developing tools or algorithms to share with the research community is often under-appreciated. In this workshop, we solicit reports on developed tools and algorithms to be shared with the community.

Real World Application and Experiences

Activity recognition "in the lab" usually works well. However, the same is often not true in the real world. In this workshop, we also solicit experiences from real-world applications. There is a huge gap between the "lab environment" and the "real-world environment". Large-scale human activity sensing corpora will help to overcome this gap.

Sensing Devices and Systems

Data collection is not performed only with "off-the-shelf" sensors; special devices sometimes need to be developed to obtain certain kinds of information. There is also a research area concerning the development and evaluation of systems and technologies for data collection.

Mobile experience sampling, experience sampling strategies

Advances in experience sampling approaches, for instance intelligently querying the user or using novel devices (e.g. smartwatches), are likely to play an important role in providing user-contributed annotations of their own activities.

Unsupervised pattern discovery

Discovering meaningful repeating patterns in sensor data can be fundamental in informing other elements of a system generating an activity corpus, such as querying the user or triggering crowd-sourced annotation.

Dataset acquisition and annotation through crowd-sourcing, web-mining

A wide abundance of sensor data is potentially within reach of users instrumented with mobile phones and other wearables. Capitalizing on crowd-sourcing to create larger datasets in a cost-effective manner may be critical to open-ended activity recognition. Online datasets could also be used to bootstrap recognition models.

Transfer learning, semi-supervised learning, lifelong learning

The ability to translate recognition models across modalities, or to use minimal supervision, would allow datasets to be reused across domains and reduce the costs of acquiring annotations.

