Alexander Strachan and Nigel Topham, School of Informatics, University of Edinburgh, Edinburgh, Scotland, EH8 9AB
Current methods of implementing wireless radio typically take one of two forms: dedicated fixed-function hardware or pure Software Defined Radio (SDR). Fixed-function hardware is efficient, but being specific to each radio standard it lacks flexibility, whereas Software Defined Radio is highly flexible but requires powerful processors to meet real-time performance constraints. This paper presents a hybrid hardware/software approach that aims to combine the flexibility of SDR with the efficiency of dedicated hardware. We evaluate this approach by simulating five variants of the IEEE 802.15.4 protocol, commonly known as Zigbee, and demonstrate the range of performance and power-consumption characteristics for different accelerator and software configurations. Across this spectrum of configurations, power consumption varies from 8% to 38% of that of a dedicated hardware implementation, and we show how the hybrid approach allows a new modulation standard to be retrofitted to an existing design with only a modest increase in power consumption.
Wireless Radio, Digital Signal Processing, Embedded Systems, Computer Architecture, Accelerators.
Mauricio Figueroa Colarte, School of Informatics and Telecommunications, Fundación Instituto Profesional DUOC UC, Viña del Mar, Chile
In the Chilean context, Long-Stay Establishments for the Elderly (ELEAM) face significant challenges in providing comprehensive care and preventing falls, which are critical incidents for this population. This project, called "ELEAM@TIC", explores the incorporation of sensor-based technology as an innovative strategy to address these problems. Through a multidisciplinary approach, the research team, led by Mauricio Figueroa Colarte, evaluated the effectiveness and acceptability of different types of sensors strategically placed on users. Preliminary results indicate that technical aspects must still be improved before a notable gain in early risk detection and response to fall incidents can be achieved, but they suggest significant potential to improve the quality of life of older adults in ELEAM. This project lays the foundation for future research and development in the field of inclusive technology and comprehensive care for the elderly.
Fall Detection, Wearables, Sensors, Older Adults, Inclusive Technology.
Prof. Salahddine KRIT, Lab.SIV/FSA, Department of Computer Science, FPO, Ibnou Zohr University, Agadir, Morocco
The rapid development of Internet of Things (IoT) and Industry 4.0 technologies has revolutionized industries globally, transforming not only manufacturing and logistics but also financial markets. These technologies are creating new possibilities for data-driven trading strategies, offering unprecedented real-time insights that enable smarter, more efficient trading decisions. This article delves into the intersection of IoT, Industry 4.0, and trading, examining how these technologies are reshaping commodity markets, supply chains, algorithmic trading, and risk management. While the potential benefits are immense, challenges such as data overload, security risks, and technological fragmentation must be addressed. This paper provides a comprehensive overview of how IoT and Industry 4.0 are transforming the landscape of modern trading.
Internet of Things (IoT), Industry 4.0, algorithmic trading, real-time data, high-frequency trading, predictive analytics, blockchain, decentralized finance (DeFi), commodities trading, risk management, cybersecurity.
Tom Springer and Peiyi Zhao, Fowler School of Engineering, Chapman University, Orange, CA, USA
This paper details the implementation of a rate-based task scheduler in the VxWorks real-time operating system, intended to enhance resource allocation for distributed real-time systems, such as IoT and embedded devices. Rate-based scheduling dynamically adjusts task execution rates based on system demand, providing a flexible and efficient approach to meeting real-time constraints. The scheduler was integrated into VxWorks and evaluated using the Cheddar scheduling analysis tool and the VxWorks VxSim simulator. Initial results demonstrate improved deadline adherence and resource management under varying loads compared to traditional schedulers. Future work includes porting the scheduler to single-board computers to assess its performance on resource-constrained IoT hardware and extending it to support resource sharing between tasks to address real-time coordination challenges. This research emphasizes the potential of rate-based scheduling for IoT applications, offering a scalable solution for managing the complexity of distributed, real-time environments in future embedded systems.
Real-Time systems, Networked Embedded Systems, Real-Time Operating Systems, Internet of Things Applications.
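The core rate-based idea — scaling task execution rates to match available capacity — can be illustrated with a small sketch. This is a generic illustration under assumed parameters (per-job costs, a single-core utilization bound), not the VxWorks implementation described in the paper:

```python
# Hedged sketch of rate-based scheduling: each task requests a rate
# (executions per second); when total processor demand exceeds capacity,
# all rates are scaled down proportionally so the task set stays feasible.

def assign_rates(requested, capacity):
    """Scale requested task rates to fit processor capacity.

    requested: dict task -> (cost_seconds_per_job, desired_rate_hz)
    capacity:  available utilization (e.g. 1.0 for one core)
    """
    # Total demanded utilization: sum of cost * rate over all tasks.
    demand = sum(cost * rate for cost, rate in requested.values())
    # Never scale up past the requested rates; only shrink when overloaded.
    scale = min(1.0, capacity / demand) if demand > 0 else 1.0
    return {task: rate * scale for task, (cost, rate) in requested.items()}
```

Under overload, every task degrades gracefully instead of some tasks missing all deadlines; when the system is under-loaded, requested rates are granted unchanged.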
Salih Hamza Abuelyamen, Retired from the Central Bureau of Statistics in Sudan, Association of retired staff from the Central Bureau of Statistics - Sudan, Private Researcher
Because of memory lapses, social factors, and other influences, direct questions on mortality status in demographic and health surveys or population censuses do not reveal accurate and complete information. Demographers therefore apply indirect questions at the data collection stage and indirect techniques to estimate mortality indicators from the resulting data. One well-known method in this respect is the Brass Combined Method, which constructs life tables by combining child and adult survival data. Producing this information from surveys or censuses is time-consuming, and the calculations involve sophisticated equations using auxiliary information from different sources. This paper presents an integrated computer package that executes all stages of this job, from questionnaire design through data entry, data editing, and data processing, to the calculation of child and adult mortality indicators and the construction of life tables by this method. It is also designed to accept raw data from different statistical censuses and surveys that include the required information.
Life table, Mortality, Adult, Child, Data entry.
Cyril Grunspan and Ricardo Perez-Marco
It has been known for some time that the Nakamoto consensus as implemented in the Bitcoin protocol is not totally aligned with the individual interests of the participants. More precisely, it has been shown that block-withholding mining strategies can exploit the difficulty adjustment algorithm of the protocol and obtain an unfair advantage. However, we show that a modification of the difficulty adjustment formula taking orphan blocks into account makes honest mining the only optimal strategy. Surprisingly, this remains true when orphan blocks are rewarded with an amount smaller than the official block reward. This gives an incentive to signal orphan blocks. The results are independent of the connectivity of the attacker.
Bitcoin, blockchain, proof-of-work, selfish mining, martingale.
Saikrishna Sanniboina, Shiv Trivedi and Sreenidhi Vijayaraghavan, University of Illinois at Urbana-Champaign, USA
Retrieval-based question answering systems often suffer from positional bias, leading to suboptimal answer generation. We propose LoRE (Logit-Ranked Retriever Ensemble), a novel approach that improves answer accuracy and relevance by mitigating positional bias. LoRE employs an ensemble of diverse retrievers, such as BM25 and sentence transformers with FAISS indexing. A key innovation is a logit-based answer ranking algorithm that combines the logit scores from a large language model (LLM) with the retrieval ranks of the passages. Experimental results on NarrativeQA and SQuAD demonstrate that LoRE significantly outperforms existing retrieval-based methods in terms of exact match and F1 scores. On SQuAD, LoRE achieves 14.5%, 22.83%, and 14.95% improvements over the baselines for ROUGE-L, EM, and F1, respectively. Qualitatively, LoRE generates more relevant and accurate answers, especially for complex queries.
Open-Domain Question Answering, Positional Bias, Sentence Transformers, Answer Ranking, Retrieval-Augmented Generation.
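The abstract does not give LoRE's exact scoring formula, but the idea of combining LLM logit scores with retrieval ranks from an ensemble can be sketched. The linear combination and the `alpha` weight below are assumptions for illustration, using reciprocal-rank fusion across retrievers:

```python
# Hypothetical sketch of a logit-ranked retriever ensemble (LoRE-style).
# Assumed: final score = alpha * LLM logit + (1 - alpha) * reciprocal-rank
# fusion across the retrievers (e.g. BM25 and a dense FAISS index).

def ensemble_rank(passages, retriever_ranks, llm_logits, alpha=0.5):
    """Rank passage ids by combined LLM-logit and retrieval evidence.

    passages:        list of passage ids
    retriever_ranks: dict retriever_name -> {passage_id: 1-based rank}
    llm_logits:      dict passage_id -> logit score from the LLM
    alpha:           assumed weight between logit score and retrieval rank
    """
    scores = {}
    for pid in passages:
        # Reciprocal-rank fusion: passages ranked highly by several
        # retrievers accumulate evidence regardless of their position
        # in any single ranked list (mitigating positional bias).
        rr = sum(1.0 / ranks[pid]
                 for ranks in retriever_ranks.values() if pid in ranks)
        scores[pid] = alpha * llm_logits.get(pid, 0.0) + (1 - alpha) * rr
    return sorted(passages, key=lambda p: scores[p], reverse=True)
```

Because the fused score depends on rank positions across all retrievers rather than on a single list order, a passage buried deep in one retriever's results can still surface if the LLM assigns it a high logit.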
Armaan Agrawal, Princeton Day School, Princeton, NJ, USA
In the evolving landscape of sustainable investing, environmental, social, and governance (ESG) metrics are crucial for evaluating companies beyond financial performance. Recognizing the growing importance of ESG to stakeholders, companies release annual sustainability reports outlining their ESG goals and progress. This paper analyzes how Fortune 500 companies integrate ESG considerations into their operations and reporting. We extract the text from the sustainability reports, split it into sentences, classify them into nineteen ESG subcategories using a zero-shot learning model, and compare the determined ESG focuses to actual data to evaluate the authenticity and effectiveness of these reports. This examination unveils the current state of ESG compliance among leading corporations and provides insights into the challenges and successes of implementing sustainable practices. More importantly, this research aims to facilitate the analysis of lengthy and complex sustainability reports by offering a scalable and flexible approach through the use of zero-shot learning. By streamlining the analysis of these reports, this research contributes to a better understanding of corporate ESG efforts and their impact on a sustainable future.
ESG, NLP, Sustainability, Zero-Shot learning.
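The report-analysis pipeline described above — split report text into sentences, score each sentence against ESG subcategory labels, and tally the report's focus — can be sketched as follows. The three labels are stand-ins for the paper's nineteen subcategories, and a trivial word-overlap score stands in for the (unnamed) zero-shot classification model so the sketch is self-contained:

```python
import re

# Hedged sketch of the zero-shot ESG tagging flow. In a real system,
# score() would be a zero-shot entailment model (e.g. an NLI classifier
# scoring the sentence against each label); the keyword-overlap stand-in
# here only illustrates the pipeline structure.

ESG_LABELS = ["greenhouse gas emissions", "employee diversity", "board governance"]

def split_sentences(text):
    # Naive sentence splitter on terminal punctuation followed by space.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def score(sentence, label):
    # Placeholder for a zero-shot model's label score.
    words = set(sentence.lower().split())
    return len(words & set(label.split())) / len(label.split())

def classify_report(text, labels=ESG_LABELS):
    """Count how many sentences fall into each ESG subcategory."""
    counts = {label: 0 for label in labels}
    for sentence in split_sentences(text):
        best = max(labels, key=lambda l: score(sentence, l))
        if score(sentence, best) > 0:
            counts[best] += 1
    return counts
```

Swapping the placeholder for a genuine zero-shot classifier keeps the rest of the pipeline unchanged, which is what makes the approach scalable across report formats and label sets.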
Lucas G. M. de Castro, Adriana L. Damian, and Celso B. Carvalho, Federal University of Amazonas, Brazil
This study focuses on developing a multimodal emotion recognition system for analyzing text, audio, and video data. We propose an advanced approach that integrates natural language processing and deep learning techniques, utilizing hierarchical attention mechanisms and cross-modal transformers to improve emotion detection accuracy. Our system achieved notable performance metrics, including a 90.8% accuracy and an 89.5% F1-score, surpassing existing state-of-the-art methods. These results demonstrate the system’s effectiveness in accurately identifying emotions and its potential application in enhancing human-computer interaction and sentiment analysis tools.
Multimodal Emotion Recognition, Natural Language Processing (NLP), Sentiment Analysis, Deep Learning, Hierarchical Attention Mechanisms, Audio-Visual Data Analysis.
Abderaouf GACEM, Mohammed HADDAD, and Hamida SEBA, Univ Lyon, UCBL, CNRS, INSA Lyon, LIRIS, UMR5205
Graph Convolutional Networks (GCNs) have recently gained significant attention due to the success of Convolutional Neural Networks in image and language processing, as well as the prevalence of data that can be represented as graphs. However, GCNs are limited by the size of the graphs they can handle and by the oversmoothing problem, which can be caused by the depth or the large receptive field of these networks. Various approaches have been proposed to address these limitations. One promising approach involves considering the minibatch training paradigm and extending it to graph-structured data by extracting subgraphs and using them as batches. Unlike the entries in a dataset of images, which are independent of one another, the essence of a graph lies in its topology, hence the dependency between its nodes. Consequently, the strategy of selecting subgraphs to form minibatches is a challenging task with a significant impact on the training process results. In this work, we propose a general framework for generating minibatches in an effective way that ensures minimal loss of node interdependence information, preserves the original graph properties, and diversifies the samples for the GCN to improve generalization. We test our training process on real-world datasets with several well-known GCN models and demonstrate the improved results compared to existing methods.
Graph Convolutional Networks, Graph Sampling, Minibatch Training.
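The general idea of subgraph minibatching — keeping node interdependence inside each batch rather than sampling nodes independently — can be sketched with a BFS-grown batch generator. This is an illustration of the paradigm, not the paper's specific framework:

```python
import random
from collections import deque

# Hedged sketch: grow each minibatch as a connected subgraph (BFS from a
# random seed) so that edges, and hence node dependencies, stay within
# the batch, unlike i.i.d. node sampling.

def bfs_subgraph(adj, seed, max_nodes):
    """Return up to max_nodes node ids forming a connected subgraph."""
    visited, queue = {seed}, deque([seed])
    while queue and len(visited) < max_nodes:
        node = queue.popleft()
        for nbr in adj.get(node, []):
            if nbr not in visited and len(visited) < max_nodes:
                visited.add(nbr)
                queue.append(nbr)
    return visited

def minibatches(adj, batch_size, rng=random):
    """Yield disjoint connected-subgraph batches covering every node once."""
    remaining = set(adj)
    while remaining:
        seed = rng.choice(sorted(remaining))
        # Keep only nodes not yet consumed by an earlier batch.
        batch = bfs_subgraph(adj, seed, batch_size) & remaining
        remaining -= batch
        yield batch
```

A GCN is then trained one subgraph per step; diversifying the seeds across epochs varies the samples the model sees, which is one route to the improved generalization the paper targets.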
Jiasheng Wang1, Yu Sun2, 1Santa Margarita Catholic High School, 22062 Antonio Pkwy, Rancho Santa Margarita, CA 92688, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768
Lumigen is an innovative air quality monitoring system designed to enhance indoor environmental awareness using real-time data visualization [1]. The system combines an air quality sensor connected to a Raspberry Pi with a set of Philips Hue lights that change color based on detected air quality levels [2]. This setup provides immediate visual feedback, alerting users to air quality changes without requiring them to check a separate device. Users can interact with Lumigen through a mobile app that facilitates real-time monitoring, historical data analysis, and customization of air quality alerts and light settings [3]. Experimental evaluations demonstrate that Lumigen effectively detects and responds to variations in air quality, with a rapid response time and high accuracy. Unlike other solutions that may require separate displays or offer limited data insights, Lumigen seamlessly integrates into everyday life, providing both visual and data-driven cues about air quality. Future developments could enhance its portability, integrate automated responses with air purifiers, and offer advanced data analytics to further empower users to manage their indoor environments proactively [4].
Indoor Air Quality, Real-Time Data Visualization, Environmental Sensing, Smart Home Automation.
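The sensor-to-light feedback described above amounts to mapping an air-quality reading onto a lamp color. The AQI thresholds and hue values below are assumptions for illustration, not Lumigen's actual configuration; the phue call shown in the trailing comment is one common way to drive Philips Hue lamps from a Raspberry Pi:

```python
# Illustrative AQI -> Philips Hue color mapping (assumed thresholds).

AQI_COLORS = [            # (upper AQI bound, hue on the 0-65535 Hue scale)
    (50, 25500),          # good -> green
    (100, 12750),         # moderate -> yellow
    (150, 5000),          # unhealthy for sensitive groups -> orange
    (float("inf"), 0),    # unhealthy and worse -> red
]

def aqi_to_hue(aqi):
    """Map an AQI reading to a Philips Hue 'hue' value."""
    for bound, hue in AQI_COLORS:
        if aqi <= bound:
            return hue

# On the device, the mapping would drive the lamps, e.g. with phue:
#   from phue import Bridge
#   Bridge("192.168.1.2").set_light(1, "hue", aqi_to_hue(reading))
```

Keeping the mapping in one table makes the alert thresholds user-customizable from the mobile app without touching the sensing loop.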
Américo Pereira1,2, Pedro Carvalho1,3, and Luís Côrte-Real1,2, 1Centre for Telecommunications and Multimedia, INESC TEC, Porto, Portugal, 2Faculty of Engineering, University of Porto, Porto, Portugal, 3Polytechnic of Porto, School of Engineering, Porto, Portugal
Visual scene understanding is a fundamental task in computer vision that aims to extract meaningful information from visual data. It traditionally involves disjoint, specialized algorithms tailored to specific application scenarios. This becomes cumbersome when designing complex systems that process both visual and semantic data extracted from visual scenes, and is even more noticeable nowadays with the influx of applications for virtual and augmented reality. When designing a system that employs automatic visual scene understanding to produce a precise and semantically coherent description of the underlying scene, which can then fuel a visualization component with 3D virtual synthesis, the lack of flexible, unified frameworks becomes still more prominent. To alleviate this issue and its inherent problems, we propose an architecture that addresses the challenges of visual scene understanding and description towards 3D virtual synthesis, enabling an adaptable, unified, and coherent solution. Furthermore, we show how our proposal can be applied in multiple application areas, and we present a proof-of-concept system that employs the architecture to demonstrate its usability in practice.
Visual Scene Understanding, Scene Understanding, 3D Reconstruction, Semantic Compression.
Nikitha Merilena Jonnada, PhD in Information Technology (Information Security Emphasis), University of the Cumberlands, Williamsburg, Kentucky, USA
In this paper, the authors discuss the rise of wireless communications and whether they are secure and safe, the future of the wireless industry, wireless communication security, and protection methods and techniques that can help organizations establish secure wireless connections with their employees, along with other factors that are important to learn and note when manufacturing, selling, or using wireless networks and wireless communication systems.
Wireless, Network, Security, Hackers, VPN, IP address.