https://journals-sol.sbc.org.br/index.php/jisa/issue/feedJournal of Internet Services and Applications2024-08-28T18:18:31+00:00Carlos Alberto Kamienskicarlos.kamienski@ufabc.edu.brOpen Journal Systems<div id="aimsAndScope" class="cms-item placeholder placeholder-aimsAndScope"> <div class="placeholder-aimsAndScope_content"> <p>In a world moving rapidly online, and becoming more and more computer-dependent, the <em>Journal of Internet Services and Applications</em> (JISA) focuses on networking, communication, content distribution, security, scalability, and management on the Internet. Coverage focuses on recent advances in state-of-the-art of Internet-related Science and Technology.</p> <p>It is the wish of the JISA team that all quality articles will be published in the journal independent of the funding capacity of the authors. Thus, if the authors are unable to pay the APC charge, we recommend that they contact the editors. The JISA team will provide support to find alternative ways of funding. In particular, a grant from the Brazilian Internet Steering Committee helps sponsor the publication of many JISA articles.</p> </div> </div>https://journals-sol.sbc.org.br/index.php/jisa/article/view/3779A distributed computing model based on delegation of serverless microservices in a cloud-to-thing environment2024-04-08T15:09:23+00:00Antonio Silvaaassilva@inf.ufrgs.brPaulo Mendespaulo.mendes@airbus.comDenis Rosáriodenis@ufpa.brEduardo Cerqueiracerqueira@ufpa.brJoão Paulo J. da Costajoaopaulodacosta@hshl.deEdison P. de Freitasepfreitas@inf.ufrgs.br<p>The cloud-to-thing is a crucial enabler of 5G and 6G networks as it supports the requirements of new services, such as latency and bandwidth-critical ones, using the available infrastructure. With the advent of new networks deployed beyond the edge, such as vehicular and satellite networks, researchers have begun investigating solutions to support the cloud-to-thing continuum, in which services distribute logic across the network, and storage is decentralized between cloud, fog, and edge. This article discusses current computing models, highlighting the advantages of serverless-based models for the deployment and management of interdependent distributed computing functions, whose behavior can be redefined in real time. Our study leads to the proposal of a new serverless-cloud-to-thing model able of delegating, executing and adapting serverless microservices in a cloud-to-thing continuum based on software-defined networking and information-centric networking concepts.</p>2024-08-03T00:00:00+00:00Copyright (c) 2024 Journal of Internet Services and Applicationshttps://journals-sol.sbc.org.br/index.php/jisa/article/view/3805Using Electric Vehicle Driver’s Driving Mode for Trip Planning and Routing2024-07-03T11:12:59+00:00Marcelo dos-Reismarcelo@sisc.com.brCelso Iwata Frisoncelso@pucpcaldas.brFabiano Costa Teixeirateixeira@pucpcaldas.brHumberto Torres Marques-Netohumberto@pucminas.br<p>With the increasing adoption of electric vehicles worldwide, some limitations have emerged in their usage. The main limitations include low autonomy and a scarcity of charging points. In this work, we describe a software architecture for planning a stop at charging stations along a trip, by prediction of battery charge to be spent along the path. We describe the main components of this architecture and evaluate regression methods for the car consumption prediction module. 
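<p>A minimal sketch of such a consumption-prediction module, using multiple linear regression over hypothetical trip features (distance, mean speed, elevation gain, temperature) and illustrative values rather than the authors' dataset, could look as follows:</p>
<pre><code class="language-python">
# Minimal sketch: multiple linear regression for per-segment battery
# consumption (kWh). Feature names and values are illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Columns: distance_km, mean_speed_kmh, elevation_gain_m, temperature_c
X = np.array([
    [12.0, 45.0,  30.0, 22.0],
    [ 8.5, 60.0,  10.0, 18.0],
    [25.0, 90.0, 120.0, 30.0],
    [ 5.2, 30.0,   5.0, 15.0],
    [18.7, 75.0,  80.0, 25.0],
    [30.1, 95.0, 200.0, 28.0],
])
y = np.array([2.1, 1.4, 5.6, 0.8, 3.5, 7.0])  # kWh spent per segment

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
print("MAE (kWh):", mean_absolute_error(y_te, model.predict(X_te)))

# Predicted consumption for the next planned segment
next_segment = [[15.0, 70.0, 40.0, 24.0]]
print("predicted kWh:", model.predict(next_segment)[0])
</code></pre>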
We also use a real dataset built from electric vehicle usage to validate the architecture concept and its viability by analyzing multiple linear regression machine learning models. To further validate the architecture, we compare simulated and real trips.</p>
2024-09-22T00:00:00+00:00
Copyright (c) 2024 Journal of Internet Services and Applications

https://journals-sol.sbc.org.br/index.php/jisa/article/view/3817
Mapping High Risk Drinking Locations from Different Clustering Methods
2024-08-28T18:18:31+00:00
João Augusto dos Santos Silva (joao.silva.452811@sga.pucminas.br), Felipe D. da Cunha (felipe@pucminas.com), Silvio Jamil F. Guimarães (sjamil@pucminas.br)
<p>Over the years, there has been a significant increase in the prevalence of diseases associated with the misuse of alcoholic beverages, resulting in three million annual deaths worldwide. Despite this alarming trend, there is a lack of dedicated applications to support individuals in their recovery from alcohol abuse. In light of this situation, the literature presents machine learning techniques that can be employed to identify and characterize urban areas with a high propensity for alcohol consumption in major cities. This study explores the utilization of Location-Based Social Networks (LBSN) to assess alcohol consumption habits in Tokyo and New York. Data from check-ins at bars and restaurants were collected, and through clustering methods, the study examined the drinking patterns of urban residents. The findings revealed that, while there were cultural variations in drinking behaviors between the two cities, users tended to consume more alcohol during weekends and nighttime. Furthermore, the research successfully pinpointed the regions most conducive to such consumption.</p>
2024-11-21T00:00:00+00:00
Copyright (c) 2024 Journal of Internet Services and Applications

https://journals-sol.sbc.org.br/index.php/jisa/article/view/3826
ATHENA-FL: Avoiding Statistical Heterogeneity with One-versus-All in Federated Learning
2024-04-22T10:53:09+00:00
Lucas Airam C. de Souza (lucas.airam@coppe.ufrj.br), Gustavo F. Camilo (franco@gta.ufrj.br), Gabriel Antonio F. Rebello (gabriel@gta.ufrj.br), Matteo Sammarco (matteo.sammarco@stellantis.com), Miguel Elias M. Campista (miguel@gta.ufrj.br), Luís Henrique M. K. Costa (luish@gta.ufrj.br)
<p>Federated learning (FL) is a distributed approach to train machine learning models without disclosing private data from participating clients to a central server. Nevertheless, FL training struggles to converge when clients have distinct data distributions, which leads to increased training time and model prediction error. We propose ATHENA-FL, a federated learning system that considers clients with heterogeneous data distributions to generate accurate models in fewer training epochs than state-of-the-art approaches. ATHENA-FL reduces communication costs, providing an additional positive aspect for resource-constrained scenarios. ATHENA-FL mitigates data heterogeneity by introducing a preliminary step before training that clusters clients with similar data distributions. To that end, we use the weights of a locally trained neural network as a probe. The proposed system also uses the one-<em>versus</em>-all model to train one binary detector for each class in the cluster. Thus, clients can compose complex models combining multiple detectors. These detectors are shared with all participants through the system's database.
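<p>A minimal sketch of the one-versus-all composition described above (one binary detector per class, combined by taking the highest detector score) is shown below; it uses a centralized scikit-learn setup purely for illustration, not ATHENA-FL's federated training or its database-based sharing:</p>
<pre><code class="language-python">
# Minimal sketch of one-versus-all composition: one binary detector per
# class, combined at prediction time by taking the highest score.
# Centralized and illustrative only.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
classes = np.unique(y)

detectors = {}
for c in classes:
    detectors[c] = LogisticRegression(max_iter=2000).fit(X, y == c)

def predict(samples):
    # Score every sample against every binary detector, pick the best class.
    scores = np.column_stack([detectors[c].predict_proba(samples)[:, 1]
                              for c in classes])
    return classes[scores.argmax(axis=1)]

print("composed accuracy:", (predict(X) == y).mean())
</code></pre>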
We evaluate the clustering procedure using different layers from the neural network and verify that the last layer is sufficient to cluster the clients efficiently. The experiments show that using the last layer as input for the clustering algorithm transmits 99.68% fewer bytes to generate clusters compared to using all the neural network weights. Finally, our results show that ATHENA-FL correctly identifies samples, achieving up to 10.9% higher accuracy than traditional training. Furthermore, ATHENA-FL achieves lower training communication costs compared with MobileNet architecture, reducing the number of transmitted bytes between 25% and 97% across evaluated scenarios.</p>2024-08-14T00:00:00+00:00Copyright (c) 2024 Journal of Internet Services and Applicationshttps://journals-sol.sbc.org.br/index.php/jisa/article/view/3834Uncovering Hidden Risks in IoT devices: A Post-Pandemic National Study of SOHO Wi-Fi Router Security2024-07-25T14:57:54+00:00Osmany Freitasosmany@ita.brFrançoa Taffareltaffarel@ita.brAldri Luiz dos Santosaldri@dcc.ufmg.brLourenço Alves Pereira Júniorljr@ita.br<p>This study thoroughly analyzes the cybersecurity status of Small Office/Home Office (SOHO) Wi-Fi routers. These routers are crucial but frequently overlooked elements in network infrastructure, particularly in light of the impact of the COVID-19 pandemic on network security. The pandemic has led to shifts in network usage patterns, blurring traditional security perimeters and extending them into private residences, creating additional points of vulnerability in urban environments. Our nationwide research evaluated an extensive dataset of router brands and models currently used at scale. We measured the prevalence of known vulnerabilities, assessed the currency of userspace and kernel software versions, and compared the security robustness of proprietary firmware against open-source alternatives. Our findings reveal a concerning landscape of widespread vulnerabilities and outdated software components, posing latent risks to end-users. The results indicate a predominance of Linux on MIPS and ARM architectures, with an average delay of 5 to 10 years between the release of the kernel and the implementation of the most recent firmware versions. As a result, we observed an average of 1344 and 72 vulnerabilities in the kernel and applications. One significant discovery from our research is that replacing the manufacturer's original firmware with open-source alternatives, such as DD-WRT, OpenWrt, and Tomato, can substantially enhance the security of the software stack. This enhancement results in improvements of up to 97% in the case of binaries and 98.42% in the kernel. Our research helps increase cybersecurity awareness by pinpointing critical home network environment weaknesses and alerting the need for more rigorous security practices in producing and maintaining SOHO routers. This investigation also allowed the report of a new remote code execution vulnerability (disclosed in CVE-2022-46552).</p>2024-10-16T00:00:00+00:00Copyright (c) 2024 Journal of Internet Services and Applicationshttps://journals-sol.sbc.org.br/index.php/jisa/article/view/3851Customer segmentation in e-commerce: a context-aware quality model for comparing clustering algorithms2024-04-28T10:25:04+00:00Adam Wasilewskiadam@wasilewscy.pl<p>E-commerce platforms are constantly evolving to meet the ever-changing needs and preferences of online shoppers. 
One of the ways that is gaining popularity and leading to a more personalised and efficient user experience is through the use of clustering techniques. However, the choice between clustering algorithms should be made based on specific business context, project requirements, data characteristics, and computational resources. The purpose of this paper was to present a quality framework that allows the comparison of different clustering approaches, taking into account the business context of the application of the results obtained. The validation of the proposed approach was carried out by comparing three methods - K-means, K-medians, and BIRCH. One possible application of the generated clusters is a platform to support multiple variants of the e-commerce user interface, which requires the selection of an optimal algorithm based on different quality criteria. The contribution of the paper includes the proposal of a framework that takes into account the business context of e-commerce customer clustering and its practical validation. The results obtained confirmed that the clustering techniques analysed can differ significantly when analysing e-commerce customer behaviour data. The quality framework presented in this paper is a flexible approach that can be developed and adapted to the specifics of different e-commerce systems.</p>2024-07-25T00:00:00+00:00Copyright (c) 2024 Journal of Internet Services and Applicationshttps://journals-sol.sbc.org.br/index.php/jisa/article/view/3882COVID-19 Mobile Applications: A Study of Trackers and Data Leaks2024-03-11T11:43:17+00:00Nicolás Serranonicolas.serrano@fing.edu.uyGustavo Betartegustun@fing.edu.uyJuan Diego Campojdcampo@fing.edu.uy<p>The emergence of COVID-19 in 2019 had a profound international impact. Technologically, governments and significant organizations responded by spearheading the development of mobile applications to aid citizens in navigating the challenges posed by the pandemic. While many of these applications proved successful in their intended purpose, the safeguarding of user privacy was not consistently prioritized, revealing a prevalent use of third-party libraries commonly referred to as trackers. In our comprehensive analysis encompassing 595 Android applications, we uncovered trackers in 402 of them, leading to the inadvertent exposure of sensitive user information and device data on external servers. Our investigation delved into the methodologies employed by these trackers to harvest and exfiltrate information. Furthermore, we examined the positions adopted by both trackers and governments. This study underscores the critical need for a reevaluation of the inclusion of trackers in applications of such sensitivity. Recognizing the potential lack of awareness within the scrutinized organizations regarding the risks associated with integrating third-party libraries, particularly trackers, we introduce SAPITO as part of our contributions. SAPITO is an open-source tool designed to identify potential leaks of sensitive data by third-party libraries in Android applications, providing a valuable resource for enhancing the security and privacy measures of mobile applications in the face of evolving technological challenges.</p>2024-07-24T00:00:00+00:00Copyright (c) 2024 Journal of Internet Services and Applicationshttps://journals-sol.sbc.org.br/index.php/jisa/article/view/3887Towards a Framework to Evaluate Generative Time Series Models for Mobility Data Features2024-04-29T21:11:33+00:00Iran F. 
Ribeiroiran.ribeiro@edu.ufes.brGiovanni Comarelagc@inf.ufes.brAntonio A. A. Rochaaaarocha@id.uff.brVinícius F. S. Motavinicius.mota@inf.ufes.br<p>Understanding human mobility has implications for several areas, such as immigration, disease control, mobile networks performance, and urban planning. However, gathering and disseminating mobility data face challenges such as data collection, handling of missing information, and privacy protection. An alternative to tackle these problems consists of modeling raw data to generate synthetic data, preserving its characteristics while maintaining its privacy. Thus, we propose MobDeep, a unified framework to compare and evaluate generative models of time series based on mobility data features, which considers statistical and deep learning-based modeling. To achieve its goal, MobDeep receives as input statistical or Generative Adversarial Network-based models (GANs) and the raw mobility data, and outputs synthetic data and the metrics comparing the synthetic with the original data. In such way, MobDeep allows evaluating synthetic datasets through qualitative and quantitative metrics. As a proof-of-concept, MobDeep implements one classical statistical model (ARIMA) and three GANs models. To demonstrate MobDeep on distinct mobility scenarios, we considered an open dataset containing information about bicycle rentals in US cities and a private dataset containing information about a Brazilian metropolis's urban traffic. MobDeep allows observing how each model performs in specific scenarios, depending on the characteristics of the mobility data. Therefore, by using MobDeep researchers can evaluate their resulting models, improving the fidelity of the synthetic data regarding the original dataset.</p>2024-08-11T00:00:00+00:00Copyright (c) 2024 Journal of Internet Services and Applicationshttps://journals-sol.sbc.org.br/index.php/jisa/article/view/3891Reducing Persistence Overhead in Parallel State Machine Replication through Time-Phased Partitioned Checkpoint2024-03-11T11:45:24+00:00Everaldo Gomes Jr.everaldogjr@gmail.comEduardo Alchierialchieri@unb.brFernando Dottifernando.dotti@pucrs.brOdorico Machado Mendizabalodorico.mendizabal@ufsc.br<p lang="en-US" style="margin-bottom: 0in; line-height: 100%;">Dependable systems usually rely on replication to provide resilience and availability. However, for long-lived systems, replication is not enough since given a sufficient amount of time, there might be more faulty replicas than the threshold tolerated in the system. In order to overcome this limitation, checkpoint and recovery techniques are used to update and resume failed replicas. In this sense, checkpointing procedures periodically capture snapshots of the system state during failure-free execution, enabling recovery processes to resume from a previously stored and consistent state. Nevertheless, saving checkpoints introduces overhead, requiring synchronization with the processing of incoming requests to prevent inconsistencies. <br />This overhead becomes even more pronounced in high-throughput systems like Parallel State Machine Replication, where workloads dominated by independent requests leverage multi-threading parallelism. <br />This work addresses the costly nature of checkpointing by proposing a novel approach that divides the replica's state into partitions and takes snapshots of only a few partitions at a time. Replicas continue executing requests targeted to other partitions without interruption. 
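<p>A minimal sketch of this time-phased partitioned checkpoint idea, assuming a simple in-memory key-value state guarded by per-partition locks (a single-process illustration, not the replicated implementation), is:</p>
<pre><code class="language-python">
# Minimal sketch: per-partition locks let the checkpoint of one partition
# proceed while requests to the other partitions keep executing.
# Illustrative only (single process, in-memory state).
import threading, copy, time

NUM_PARTITIONS = 4
state = [dict() for _ in range(NUM_PARTITIONS)]
locks = [threading.Lock() for _ in range(NUM_PARTITIONS)]

def execute(key, value):
    p = hash(key) % NUM_PARTITIONS
    with locks[p]:                     # delayed only if partition p is being saved
        state[p][key] = value

def save_to_stable_storage(p, snapshot):
    time.sleep(0.01)                   # stand-in for disk I/O
    print(f"partition {p}: {len(snapshot)} keys saved")

def checkpoint():
    for p in range(NUM_PARTITIONS):    # time-phased: one partition at a time
        with locks[p]:
            snapshot = copy.deepcopy(state[p])
        save_to_stable_storage(p, snapshot)   # written outside the lock

for i in range(100):
    execute(f"k{i}", i)
checkpoint()
</code></pre>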
Thus, incoming requests experience delays during a checkpoint only if they access a partition currently being saved. Combining this approach with Parallel State Machine Replication yields reduced snapshot durations and lower client latency during checkpointing. Additionally, the proposed approach accelerates replica recovery through collaborative state transfer, enabling workload distribution among replicas and parallel execution of transfer and installation of the recovering state.</p>
2024-07-26T00:00:00+00:00
Copyright (c) 2024 Journal of Internet Services and Applications

https://journals-sol.sbc.org.br/index.php/jisa/article/view/3897
Anomaly Detection in Sound Activity with Generative Adversarial Network Models
2024-05-04T15:18:30+00:00
Wilson A. de Oliveira Neto (wilson.oliveira@icomp.ufam.edu.br), Elloá B. Guedes (ebgcosta@uea.edu.br), Carlos Maurício S. Figueiredo (cfigueiredo@uea.edu.br)
<p>In state-of-the-art anomaly detection research, prevailing methodologies predominantly employ Generative Adversarial Networks and Autoencoders for image-based applications. Despite the efficacy demonstrated in the visual domain, there remains a notable dearth of studies showcasing the application of these architectures in anomaly detection within the sound domain. This paper introduces tailored adaptations of cutting-edge architectures for anomaly detection in audio and conducts a comprehensive comparative analysis to substantiate the viability of this novel approach. The evaluation is performed on the DCASE 2020 dataset, encompassing over 180 hours of industrial machinery sound recordings. Our results indicate superior anomaly classification, with an average Area Under the Curve (AUC) of 88.16% and partial AUC of 78.05%, surpassing the performance of established baselines. This study not only extends the applicability of advanced architectures to the audio domain but also establishes their effectiveness in the challenging context of industrial sound anomaly detection.</p>
2024-09-05T00:00:00+00:00
Copyright (c) 2024 Journal of Internet Services and Applications

https://journals-sol.sbc.org.br/index.php/jisa/article/view/3902
A process mining-based method for attacker profiling using the MITRE ATT&CK taxonomy
2024-03-11T11:47:55+00:00
Marcelo Rodríguez (marcelor@fing.edu.uy), Gustavo Betarte (gustun@fing.edu.uy), Daniel Calegari (dcalegar@fing.edu.uy)
<p>Cybersecurity intelligence involves gathering and analyzing data to understand cyber adversaries’ capabilities, intentions, and behaviors to establish adequate security measures. The MITRE ATT&CK framework is valuable for gaining insight into cyber threats since it details attacker tactics, techniques, and procedures. However, to fully understand an attacker’s behavior, it is necessary to connect individual tactics. In this context, Process Mining (PM) can be used to analyze runtime events from information systems, thereby discovering causal relations between those events.
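<p>A minimal sketch of this general idea, relabeling low-level events with ATT&CK technique names and counting the directly-follows relation that process-discovery algorithms start from, is shown below; the event-to-technique mapping and the cases are hypothetical, not the authors' pipeline or datasets:</p>
<pre><code class="language-python">
# Minimal sketch: relabel low-level events with ATT&CK technique names and
# count the directly-follows relation per case, the basic input that
# process-discovery algorithms work from. The mapping below is hypothetical.
from collections import Counter

EVENT_TO_TECHNIQUE = {
    "powershell_spawn":  "T1059 Command and Scripting Interpreter",
    "registry_run_key":  "T1547 Boot or Logon Autostart Execution",
    "lsass_read":        "T1003 OS Credential Dumping",
    "smb_lateral_copy":  "T1021 Remote Services",
}

# Each case is one attack session: an ordered list of low-level events.
cases = {
    "case-1": ["powershell_spawn", "registry_run_key", "lsass_read"],
    "case-2": ["powershell_spawn", "lsass_read", "smb_lateral_copy"],
}

directly_follows = Counter()
for events in cases.values():
    techniques = [EVENT_TO_TECHNIQUE[e] for e in events]
    directly_follows.update(zip(techniques, techniques[1:]))

for (a, b), n in directly_follows.most_common():
    print(f"{n}x  {a}  ->  {b}")
</code></pre>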
This article presents a novel approach combining Process Mining with the MITRE ATT&CK framework to discover process models of different attack strategies. Our approach involves mapping low-level system events to corresponding event labels from the MITRE ATT&CK taxonomy, increasing the abstraction level for attacker profiling. We demonstrate the effectiveness of our approach using real datasets of human and automated (malware) behavior. This exploration helps to develop more efficient and adaptable security strategies to combat current cyber threats and provides valuable guidelines for future research.</p>
2024-08-01T00:00:00+00:00
Copyright (c) 2024 Journal of Internet Services and Applications

https://journals-sol.sbc.org.br/index.php/jisa/article/view/3905
Interoperable node integrity verification for confidential machines based on AMD SEV-SNP
2024-03-11T11:58:30+00:00
Davi Pontes (davi.pontes@lsd.ufcg.edu.br), Fernando Silva (fernando.silva@lsd.ufcg.edu.br), Anderson Melo (anderson.melo@lsd.ufcg.edu.br), Eduardo Falcão (eduardo@dca.ufrn.br), Andrey Brito (andrey@computacao.ufcg.edu.br)
<p>Confidential virtual machines (CVMs) are cloud providers' most recent security offer, providing confidentiality and integrity features. Although confidentiality protects the machine from the host operating system, firmware, and cloud operators, integrity protection is even more useful, enabling protection for a wider range of security issues. Unfortunately, CVM integrity verification depends on remote attestation protocols, which are not trivial for operators and differ largely among cloud providers. We propose an approach for abstracting CVM attestation that leverages an open-source standard, the Cloud Native Computing Foundation's Secure Production Identity Framework for Everyone (SPIFFE). Our approach can integrate smoothly even when applications are unaware of CVMs or the SPIFFE standard. Nevertheless, our implementation inherits SPIFFE flexibility for empowering access control when applications support SPIFFE. In terms of performance, CVMs incur an additional 1.3 s to 21.9 s in boot times (varying with the cloud environment), a marginal degradation for CPU, RAM, and IO workloads (maximum degradation of 2.6%), and low but not imperceptible degradation for database workloads (between 3.6% and 7.13%). Finally, we provide usability mechanisms and a threat analysis to help users navigate cloud providers' different CVM implementations and resulting guarantees.</p>
2024-07-25T00:00:00+00:00
Copyright (c) 2024 Journal of Internet Services and Applications

https://journals-sol.sbc.org.br/index.php/jisa/article/view/3907
Deep Learning Applied to Imbalanced Malware Datasets Classification
2024-05-29T07:35:58+00:00
Marcelo Palma Salas (marcelopalma@ic.unicamp.br), Paulo Lício de Geus (pgeus@unicamp.br)
<p>In the current day, the evolution and exponential proliferation of malware involve modifications and camouflage of their structure through techniques like obfuscation, polymorphism, metamorphism, and encryption. With the advancements in deep learning, methods such as convolutional neural networks (CNN) have emerged as potent tools for deciphering intricate patterns within this malicious software.
The present research uses the capacity of CNN to learn the global structure of the code converted to an RGB or grayscale image and decipher the patterns present in the malware datasets generated from these images. The study explores fine-tuning techniques, including bicubic interpolation, ReduceLROnPlateau, and class weight estimation, in order to generalize the model and reduce the risk of overfitting for malware that uses evasion techniques against classification. Taking advantage of transfer learning and the MobileNet architecture, we created a MobileNet fine-tuning (FT) model. The application of this new model in four datasets, including Microsoft Big 2015, Malimg, MaleVis, and a new Fusion dataset, achieved 98.71%, 99.08%, 96.04%, and 98.04% accuracy, respectively, which underscores the robustness of the proposed model. The Fusion dataset is a combination of the first three datasets, consisting of a set of 32,601 known malware image files representing a mix of 59 different families. Despite the success, the study reveals performance deterioration with an increase in the number of malware families, highlighting the need for further exploration into the limits of CNNs in malware classification.</p>2024-09-16T00:00:00+00:00Copyright (c) 2024 Journal of Internet Services and Applicationshttps://journals-sol.sbc.org.br/index.php/jisa/article/view/3914OneTrack - An End2End approach to enhance MOT with Transformers2024-05-30T18:21:47+00:00Luiz Araujoluiz.clssss.a@gmail.comCarlos Figueiredocfigueiredo@uea.edu.br<p>This paper introduces OneTrack, an innovative end-to-end transformer-based model for Multiple Object Tracking (MOT), focusing on enhancing efficiency without significantly compromising accuracy. Addressing the challenges inherent in MOT, such as occlusions, varied object sizes, and motion prediction, OneTrack leverages the power of vision transformers and attention layers, optimizing them for real-time applications. Utilizing a unique Object Sequence Patch Input and a Vision Transformer Encoder, the model simplifies the standard transformer approach by employing only the encoder component, significantly reducing computational costs. This approach is validated using the MOT17 dataset, a benchmark in the field, ensuring a comprehensive evaluation against established metrics like MOTA, HOTA, and IDF1. The experimental results demonstrate OneTrack's capability to outperform other transformer-based models in inference speed, with a marginal trade-off in accuracy metrics. The model's inherent design limitations, such as a maximum of 100 objects per window, are adjustable to suit specific applications, offering flexibility in various scenarios. The conclusion highlights the model's potential as a lightweight solution for MOT tasks, suggesting future work directions that include exploring alternative data representations and encoders, and developing a dedicated loss function to further enhance detection and tracking capabilities. OneTrack presents a promising step towards efficient and effective MOT solutions, catering to the demands of real-time applications.</p>2024-09-02T00:00:00+00:00Copyright (c) 2024 Journal of Internet Services and Applicationshttps://journals-sol.sbc.org.br/index.php/jisa/article/view/3965Micro-Chain: A Cluster Architecture for Managing NDN Microservices2024-07-13T20:43:56+00:00Otávio A. R. da Cruzotavio.augusto@ufrgs.brAntonio A. S. da Silvaaassilva@inf.ufrgs.brPaulo Milheiro Mendespaulo.mendes@airbus.comDenis L. do Rosáriodenis@ufpa.brEduardo C. 
Cerqueiracerqueira@ufpa.brJulio C. S. dos Anjosjcsanjos@ufc.brCarlos E. Pereiracpereira@ece.ufrgs.brEdison P. de Freitasedison.pignaton@ufrgs.br<p>Network Functions Virtualization (NFV) and Information-Centric Networking (ICN) are promising networking paradigms for the future of the Internet. Concurrently, microservice architecture offers an attractive alternative to monolithic architecture for software development. This work addresses a scenario composed of these concepts, where an ICN network must be deployed and managed using ICN microservices. In this scenario, ICN microservices must be created, connected, configured, and monitored at runtime, which is not trivial. To address these challenges, this work proposes Micro-Chain, an architecture for deploying, scaling, and linking ICN microservices. The architecture consists of four modules, relationships between them, and core operations. A Micro-Chain implementation is presented as proof of concept, which has a threshold-based scaling process and a placement method to minimize the number of hops for an ICN microservice chain. The evaluation assesses a scale-on-demand scenario in a cluster with three nodes. The results demonstrate that 1) the developed solution can scale on demand, 2) the communication overhead is 0.632%, and 3) the placement of microservices affects network performance.</p>2024-10-03T00:00:00+00:00Copyright (c) 2024 Journal of Internet Services and Applicationshttps://journals-sol.sbc.org.br/index.php/jisa/article/view/4006The Impact of Federated Learning on Urban Computing2024-05-30T18:19:44+00:00José R. F. Souzafsjoseroberto@gmail.comShéridan Z. L. N. Oliveirasheridan.oliveira@aluno.ufabc.edu.brHelder Oliveirahelder.oliveira@ufabc.edu.br<p>In an era defined by rapid urbanization and technological advancements, this article provides a comprehensive examination of the transformative influence of Federated Learning (FL) on Urban Computing (UC), addressing key advancements, challenges, and contributions to the existing literature. By integrating FL into urban environments, this study explores its potential to revolutionize data processing, enhance privacy, and optimize urban applications. We delineate the benefits and challenges of FL implementation, offering insights into its effectiveness in domains such as transportation, healthcare, and infrastructure. Additionally, we highlight persistent challenges including scalability, bias mitigation, and ethical considerations. By pointing towards promising future directions such as advancements in edge computing, ethical transparency, and continual learning models, we underscore opportunities to enhance further the positive impact of FL in shaping more adaptable urban environments.</p>2024-09-21T00:00:00+00:00Copyright (c) 2024 Journal of Internet Services and Applicationshttps://journals-sol.sbc.org.br/index.php/jisa/article/view/4023The Intersection of the Internet of Things and Smart Cities: A Tertiary Study2024-05-03T20:31:09+00:00Rebeca C. Mottarmotta@cos.ufrj.brThais V. Batistathaisbatista@gmail.comFlavia C. Delicatofdelicato@gmail.com<p>Since the transition from an agricultural to an industrial economy, cities have attracted large masses of people in search of their facilities. Cities are symbols of progress and opportunities. However, urbanization also brings with it several problems and challenges. Smart Cities (SC) offer a way to address these challenges by using technology to make cities more efficient, sustainable, and livable. 
There are numerous technologies that enable the concept of smart cities, including the Internet of Things (IoT). The IoT provides the fundamental sensing infrastructure that allows connecting and virtualizing the physical world, extracting environmental variables that serve as initial inputs for decision-making processes. Such processes are provided by software systems whose construction and execution need to deal with the dynamism, heterogeneity and often serendipitous nature that permeate both the domains of smart cities and IoT. As the integration of IoT and Smart Cities paradigms is still at an early stage, and there are not yet holistic solutions to explore the full potential of such a synergy, we carried out a literature review on the topic. In particular, the objective of this article is to describe the results of a structured literature review to identify general concepts about quality attributes, applications, technologies, and challenges of IoT solutions applied to the SC domain. Our main goal is to assist in understanding the basic concepts of the research area through the search for secondary studies. This review is a tertiary study covering 17 reviews and aims to promote a high-level discussion on the identified characteristics and provide an overview of the area to promote a better perception of current development needs and opportunities.</p>2024-09-13T00:00:00+00:00Copyright (c) 2024 Journal of Internet Services and Applicationshttps://journals-sol.sbc.org.br/index.php/jisa/article/view/4026Resource Allocation Based on Task Priority and Resource Consumption in Edge Computing2024-05-04T00:46:04+00:00Guilherme Alves Araújoguialves@alu.ufc.brSandy Ferreira da Costa Bezerrasandycosta@alu.ufc.brAtslands Rego da Rochaatslands@ufc.br<p>The computational power of Internet of Things (IoT) devices is usually low, which makes it necessary to process data and extract relevant information on devices with higher processing capacity. Edge Computing emerged as a complementary solution to cloud computing, providing devices at the network edge with computational resources to handle the data processing and analysis that constrained IoT devices eventually cannot perform. This solution allows data processing closer to the IoT devices, reducing latency for IoT applications. However, the resource constraints of edge nodes, which have lower computational power than the cloud nodes, make resource allocation and processing massive requests challenging. This study proposes an edge resource allocation mechanism based on task priority and machine learning. The proposed approach efficiently allocates resources for IoT requests based on their task priorities while monitoring the resource consumption of edge nodes. This study evaluates the performance of different classification algorithms by using well-known metrics for classifying models. The most efficient classifier achieved an accuracy of 92% and a precision of 90%. The results indicate good performance when using this classifier in the evaluated approach. The proposed mechanism demonstrated that resource management can be done more efficiently with significantly lower resource utilization when compared to an allocation method based only on distance. The study tested different scenarios regarding the number of requests, edge nodes, and a proposed failure mechanism to schedule failed node tasks to functional nodes. This failure control mechanism is a significant contribution of the proposal. 
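<p>A minimal sketch of such priority-based allocation with rescheduling of a failed node's tasks, using illustrative node capacities and task priorities rather than the evaluated scenarios, is:</p>
<pre><code class="language-python">
# Minimal sketch: priority-ordered task allocation to edge nodes with a
# simple reschedule of tasks from a failed node onto functional ones.
# Node capacities, task priorities and names are illustrative only.
def allocate(tasks, capacity, placement):
    for priority, name in sorted(tasks):              # lower number = higher priority
        candidates = [n for n, free in capacity.items() if free > 0]
        if candidates:
            node = max(candidates, key=capacity.get)  # prefer the most free capacity
            capacity[node] -= 1
            placement[name] = node
    return placement

capacity = {"edge-1": 2, "edge-2": 2, "edge-3": 2}     # free task slots per node
tasks = [(1, "anomaly-detection"), (2, "alerting"),
         (2, "video-analytics"), (3, "telemetry")]

placement = allocate(tasks, capacity, {})
print("initial placement:", placement)

# Failure control: reschedule tasks hosted on a failed node onto functional ones.
failed = "edge-2"
capacity.pop(failed)
orphans = [(1, name) for name, node in placement.items() if node == failed]
print("after failure of", failed, ":", allocate(orphans, capacity, placement))
</code></pre>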
Therefore, the proposed method can become a valuable tool for efficient resource management and allocation at a reduced computational cost.</p>
2024-09-16T00:00:00+00:00
Copyright (c) 2024 Journal of Internet Services and Applications

https://journals-sol.sbc.org.br/index.php/jisa/article/view/4036
FramCo: Frame corrupted detection for the Open RAN intelligent controller to assist UAV-based mission-critical operations
2024-05-15T13:29:48+00:00
Ciro J. A. Macedo (ciro.macedo@ifg.edu.br), Elton V. Dias (eltondias@inf.ufg.br), Cristiano Bonato Both (cbboth@unisinos.br), Kleber Vieira Cardoso (kleber@ufg.br)
<p>Unmanned Aerial Vehicles (UAVs) and communication systems are fundamental elements in Mission Critical services, such as Search and Rescue (SAR) operations. UAVs can fly over an area, collect high-resolution video information, and transmit it back to a ground base station to identify victims through a Deep Neural Network object detection model. However, instabilities in the communication infrastructure can compromise SAR operations. For example, if one or more transmitted data packets fail to arrive at their destination, the high-resolution video frames can be distorted, degrading the application performance. In this article, we explore the relevance of computer vision application information, complementing the functionalities of Radio Access Network Intelligent Controllers for managing and orchestrating network components, through FramCo, a corrupted-frame detector based on EfficientNet. Another contribution of this article is an architectural element that explores the components of the Open Radio Access Network (O-RAN) standard specification, with an assessment of a complex use case that explores new market trends, such as SAR operations assisted by UAV-based computer vision.
The experimen</span><span dir="ltr" style="left: 65.4468px; top: 527.2px; font-size: 11.6433px; font-family: serif; transform: scaleX(0.971844);" role="presentation">tal results indicate that the proposed architectural element can act as an external trigger, integrated into the O-RAN </span><span dir="ltr" style="left: 65.4468px; top: 541.43px; font-size: 11.6433px; font-family: serif; transform: scaleX(0.996261);" role="presentation">cognitive control loop, significantly improving the performance of applications with sensitive Key Performance </span><span dir="ltr" style="left: 65.4468px; top: 555.66px; font-size: 11.6433px; font-family: serif; transform: scaleX(0.958766);" role="presentation">Indicators (KPIs).</span></p>2024-07-18T00:00:00+00:00Copyright (c) 2024 Journal of Internet Services and Applicationshttps://journals-sol.sbc.org.br/index.php/jisa/article/view/4041Evaluation of Trajectory and Destination Prediction Models: A Systematic Classification and Analysis of Methodologies and Recent Results2024-08-11T15:27:58+00:00João Batista Firmino Júniorfirminojunior83@gmail.comJanderson Ferreira Dutrajanderson.dutra@ifpb.edu.brFrancisco Dantas Nobre Netodantas.nobre@ifpb.edu.br<p>Predicting trajectories and destinations is of considerable relevance in the context of urban mobility, as it can be useful for suggesting detours, avoiding congestion, and optimizing people's commutes. Therefore, this research performs a classification and analysis of trajectory and destination prediction models in articles published from 2017 to 2023. These models were mapped considering: authors; the existence of more than one geographic scenario; the type of forecast; the use of semantic and contextual data; and description of the algorithms. The result consists of discussions of representative works, based on classification, with grouping of techniques. Furthermore, there is a focus on works that used contextual and/or semantic data, from which another framework was developed, specifying the titles of the articles, and whether the methodology involved the use of points or areas of interest, and a reference to how they were generated. This focus expands the previous framework, specifying the differences of a portion of published studies.</p>2024-10-08T00:00:00+00:00Copyright (c) 2024 Journal of Internet Services and Applicationshttps://journals-sol.sbc.org.br/index.php/jisa/article/view/4044MESFLA: Model Efficiency through Selective Federated Learning Algorithm2024-08-09T15:36:26+00:00Alex Barrosdossantosab@gmail.comRafael Veigarafael.teixeira.silva@icen.ufpa.brRenan Moraisrenan.morais@itec.ufpa.brDenis Rosáriodenis@ufpa.brEduardo Cerqueiracerqueira@ufpa.br<p>Integrating big data and deep learning across various applications has significantly enhanced intelligence and efficiency in our daily lives. However, it also requires extensive data sharing, raising significant communication and privacy concerns. In this context, Federated Learning (FL) emerges as a promising solution to enable collaborative model training while preserving the privacy and autonomy of participating clients. FL facilitates collaborative model training by enabling data to be trained locally on devices, eliminating the need for individual information sharing among clients. A client selection mechanism strategically chooses a subset of participating clients to contribute to the model training in each learning round. 
However, an efficient selection of clients to participate in the training process directly impacts model convergence/accuracy and the overall communication load on the network. In addition, FL faces challenges when dealing with non-Independent and Non-Identically Distributed (non-IID) data, where the diversity in data distribution often leads to reduced classification accuracy. Hence, designing an efficient client selection mechanism in a scenario with non-IID data is essential, but it is still an open issue. This article proposes a Model Efficiency through Selective Federated Learning Algorithm, called MESFLA. The mechanism employs a Centered Kernel Alignment (CKA) algorithm to search for similar models based on data weight or similarity between models, i.e., grouping participants with comparable data distributions or learning objectives. Afterward, MESFLA selects the most relevant clients in each group based on data weight and entropy. Our comprehensive evaluation across multiple datasets, including MNIST, CIFAR-10, and CIFAR-100, demonstrates MESFLA's superior performance over traditional FL algorithms. Our results show an accuracy improvement and a smaller loss at each aggregation of the new global model sent to clients, with a difference of 3 rounds when using Data Weight in comparison with the other selection methods.</p>
2024-10-22T00:00:00+00:00
Copyright (c) 2024 Journal of Internet Services and Applications

https://journals-sol.sbc.org.br/index.php/jisa/article/view/4363
Cybersecurity Testbeds for IoT: A Systematic Literature Review and Taxonomy
2024-07-16T20:28:35+00:00
Khalil G. Queiroz de Santana (khalil.santana@edu.univali.br), Marcos Schwarz (marcos.schwarz@rnp.br), Michelle Silva Wangham (wangham@univali.br)
<p>Researchers across the globe are carrying out numerous experiments related to cybersecurity, such as botnet dispersion, intrusion detection systems powered by machine learning, and others, to explore these topics in many different contexts and environmental settings. One current research topic is the behavior of Internet of Things (IoT) devices, as they increasingly become a common feature of homes, offices, and companies. Network testing environments, designated as testbeds, are boosting the effectiveness of network research. However, exploratory studies in IoT cybersecurity may include a wide range of requirements. This article seeks to carry out a survey of IoT cybersecurity testbeds. A critical systematic literature review was conducted to select relevant articles, and a novel taxonomy was applied to classify the testbeds. The surveyed testbeds are classified in terms of their primary target domain and other features such as fidelity, heterogeneity, scalability, security, reproducibility, flexibility, and measurability. Furthermore, we have compared the testbeds with regard to each feature.
Thus, the main contribution made by this study lies in a) the insights it provides into the state-of-the-art in IoT cybersecurity testbeds, and b) the emphasis laid on the main benefits and limitations that were found in the surveyed testbeds.</p>2024-10-04T00:00:00+00:00Copyright (c) 2024 Journal of Internet Services and Applicationshttps://journals-sol.sbc.org.br/index.php/jisa/article/view/4454Virtualized 5G Tesbed using OpenAirInterface: Tutorial and Benchmarking Tests2024-08-20T12:03:58+00:00Matheus Dóriamatheus.fagundes.067@ufrn.edu.brVicente Sousavicente.sousa@ufrn.brAntonio Camposantonio.campos@ufrn.brNelson Oliveiranelson@imd.ufrn.brPaulo Eduardopaulo.eduardo.093@ufrn.edu.brPaulo Filhopaulo.filho.071@ufrn.edu.brCarlos Limacarlos.lima.106@ufrn.edu.brJoão Guilhermejoao.guilherme.016@ufrn.edu.brDaniel Lunadaniel.luna.088@ufrn.edu.brIago Regoiago.diogenes.072@ufrn.edu.brMarcelo Fernandesmfernandes@dca.ufrn.brAugusto Netoaugusto@dimap.ufrn.br<p>The development of 5G and its evolutionary path to 6G brings virtualization as close as possible to the antennas. Native 3GPP systems are now software running at servers boosted by accelerator cards to cope with computationally intense signal processing. Meanwhile, the Radio Frequency (RF) front-end is still proprietary hardware, sheltering specific PHY-layer procedures like passband amplification/modulation. This approach takes advantage of the software's flexibility while keeping the complex microsecond signal processing performance from modern telecommunication systems. This paper provides tutorial material on the Core Network and Radio Access Network of OpenAirInterface (OAI) 5G stack on top of Universal Software Radio Peripheral (USRP) platforms. A set of blueprints showcases OAI's ability to provide a virtualized 5G network with different transmission capabilities and the possibility to use it with commercial mobile phones. Configuration discussions and throughput benchmark analyses follow installation and deployment instructions. Our results show that 5G prototyping using OAI and USRP frontends can lead to good reproducibility and consistent throughput.</p>2024-10-29T00:00:00+00:00Copyright (c) 2024 Journal of Internet Services and Applicationshttps://journals-sol.sbc.org.br/index.php/jisa/article/view/4509Enhancing Infrastructure Observability: Machine Learning for Proactive Monitoring and Anomaly Detection2024-08-01T20:03:56+00:00Darlan Noetzolddarlan.noetzold@gmail.comAnubis G. D. M. Rossettoanubisrossetto@ifsul.edu.brValderi R. Q. Leithardtvalderi.leithardt@gmail.comHumberto J. de M. Costahumberto.costa@veranopolis.ifrs.edu.br<p>This study addresses the critical challenge of proactive anomaly detection and efficient resource management in infrastructure observability. Introducing an innovative approach to infrastructure monitoring, this work integrates machine learning models into observability platforms to enhance real-time monitoring precision. Employing a microservices architecture, the proposed system facilitates swift and proactive anomaly detection, addressing the limitations of traditional monitoring methods that often fail to predict potential issues before they escalate. The core of this system lies in its predictive models that utilize Random Forest, Gradient Boosting, and Support Vector Machine algorithms to forecast crucial metric behaviors, such as CPU usage and memory allocation. 
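<p>A minimal sketch of this kind of metric forecasting, fitting one of the cited models (gradient boosting) on lagged values of a synthetic CPU-usage series that stands in for real telemetry, is:</p>
<pre><code class="language-python">
# Minimal sketch: forecast the next CPU-usage sample from lagged values
# with one of the cited models. The synthetic series stands in for real
# telemetry exported by the observability stack.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
t = np.arange(600)
cpu = 50 + 20 * np.sin(2 * np.pi * t / 144) + rng.normal(0, 3, t.size)  # % usage

LAGS = 6
X = np.column_stack([cpu[i:i - LAGS] for i in range(LAGS)])  # sliding windows
y = cpu[LAGS:]

split = 480
model = GradientBoostingRegressor().fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("R2 on held-out window:", round(r2_score(y[split:], pred), 3))
print("next-step forecast:", round(model.predict(cpu[-LAGS:].reshape(1, -1))[0], 1))
</code></pre>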
The empirical results underscore the system's efficacy, with the GradientBoostingRegressor model achieving an R² score of 0.86 for predicting request rates, and the RandomForestRegressor model significantly reducing the Mean Squared Error by 2.06% for memory usage predictions compared to traditional monitoring methods. These findings not only demonstrate the potential of machine learning in enhancing observability but also pave the way for more resilient and adaptive infrastructure management.</p>2024-10-28T00:00:00+00:00Copyright (c) 2024 Journal of Internet Services and Applicationshttps://journals-sol.sbc.org.br/index.php/jisa/article/view/3813Towards a Decentralized Blockchain-Based Resource Monitoring Solution For Distributed Environments2024-01-08T20:25:26+00:00Rodrigo B. Dos Passosrbpassos@inf.ufrgs.brKassiano J. Matteussikjmatteussi@inf.ufrgs.brJulio C. S. Dos Anjosjcsanjos@ufc.brClaudio F. R. Geyergeyer@inf.ufrgs.br<p>The increasing number of connected users and devices to Cloud, Fog, and Edge environments encouraged the creation of many applications and services in the most varied areas and domains. Such services are highly distributed on top of heterogeneous infrastructures that require real-time monitoring. The monitoring process may be considered a complex task since it requires experienced users and robust cloud-based solutions to support the most varied needs in such scenarios. The main problem relies on the centralization of the monitoring approaches for cloud-centric solutions that represent a central point of failure in end-to-end communication, compromising the application's security and performance in case of high latency or downtime. In this context, blockchain networks enable exciting features such as decentralization, immutability, and traceability with higher security levels. This work is towards a blockchain-based and decentralized resource monitoring solution for distributed environments. The proposed solution integrates blockchain technology to continuously monitor, store, and safely broadcast Operating System performance counters in a highly decentralized fashion. The results demonstrated that a blockchain-based monitoring tool based on Smart Contract is feasible and that it may serve as an entry point for varied solutions for monitoring, security, scheduling, and so on.</p>2024-03-07T00:00:00+00:00Copyright (c) 2024 Journal of Internet Services and Applicationshttps://journals-sol.sbc.org.br/index.php/jisa/article/view/3634Hardware-Independent Embedded Firmware Architecture Framework2024-01-22T18:47:59+00:00Mauricio D. O. Farinamauriciofarina@icloud.comDaniel H. Pohrendaniel.pohren@ufrgs.brAlexandre dos S. Roqueale.roque@gmail.comAntonio Silvaaassilva@inf.ufrgs.brJoao Paulo J. da CostaJoaoPaulo.daCosta@hshl.deLisandra Manzoni Fontouralisandra@inf.ufsm.brJulio C. S. dos Anjosjcsanjos@ufc.brEdison Pignaton de Freitasedison.pignaton@ufrgs.br<div class="page" title="Page 1"> <div class="layoutArea"> <div class="column"> <p>Unlike other forms of development, the way firmware development is designed is somewhat outdated. It is not unusual to come across whole systems implemented in a cross-dependent monolithic way. In addition, the software of many implementations is hardware-dependent. Hence, significant hardware changes may result in extensive firmware implementation reviews that can be time-consuming and lead to low-quality ports, which may represent an important problem for Internet of Things (IoT) applications that evolve very frequently. 
To address this problem, this study proposes an embedded firmware development framework that allows reuse and portability while improving the firmware development life cycle. In addition, the typical mistakes of a novice software developer can be reduced by employing this methodology. An embedded IoT system project was refactored for this framework model to validate this proposal. Finally, a comparison was made between a legacy and framework project to demonstrate that the proposed framework can make a substantial improvement in portability, reuse, modularity, and other firmware factors.</p> </div> </div> </div>2024-04-16T00:00:00+00:00Copyright (c) 2024 Journal of Internet Services and Applicationshttps://journals-sol.sbc.org.br/index.php/jisa/article/view/3809MEDAVET: Traffic Vehicle Anomaly Detection Mechanism based on spatial and temporal structures in vehicle traffic2024-04-03T23:44:17+00:00Ana Rosalía Huamán Reynaarhuamanr@usp.brAlex Josué Flórez Farfánalex.josueff@usp.brGeraldo P. Rocha Filhogeraldo.rocha@uesb.edu.brSandra Sampaios.sampaio@manchester.ac.ukRobson de Granderdegrande@brocku.caLuis Hideo Vasconcelos Nakamuranakamura@icmc.usp.brRodolfo Ipolito Meneguettemeneguette@icmc.usp.br<p>Road traffic anomaly detection is vital for reducing the number of accidents and ensuring a more efficient and safer transportation system. In highways, where traffic volume and speed limits are high, anomaly detection is not only essential but also considerably more challenging, given the multitude of fast-moving vehicles, often observed from extended distances and diverse angles, occluded by other objects, and subjected to variations in illumination and adverse weather conditions. This complexity has meant that human error often limits anomaly detection, making the role of computer vision systems integral to its success. In light of these challenges, this paper introduces MEDAVET - a sophisticated computer vision system engineered with an innovative mechanism that leverages spatial and temporal structures for high-precision traffic anomaly detection on highways. MEDAVET is assessed in its object tracking and anomaly detection efficacy using the UA-DETRAC and Track 4 benchmarks and has its performance compared with that of an array of state-of-the-art systems. The results have shown that, when MEDAVET’s ability to delimit relevant areas of the highway, through a bipartite graph and the Convex Hull algorithm, is paired with its QuadTree-based spatial and temporal approaches for detecting occluded and stationary vehicles, it emerges as superior in precision, compared to its counterparts, and with a competitive computational efficiency.</p>2024-04-24T00:00:00+00:00Copyright (c) 2024 Journal of Internet Services and Applicationshttps://journals-sol.sbc.org.br/index.php/jisa/article/view/3871Recovery of the secret on Binary Ring-LWE problem using random known bits - Extended Version2024-03-21T14:33:20+00:00Reynaldo Caceres Villenareynaldocv@gmail.comRouto Teradart@ime.usp.br<p>There are cryptographic systems that are secure against attacks by both quantum and classical computers. Some of these systems are based on the Binary Ring-LWE problem which is presumed to be difficult to solve even on a quantum computer. This problem is considered secure for IoT (Internet of things) devices with limited resources. In Binary Ring-LWE, a polynomial a is selected randomly and a polynomial b is calculated as b = a.s + e where the secret s and the noise e are polynomials with binary coefficients. 
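<p>A worked sketch of this instance generation, computing b = a·s + e in the ring Z_q[x]/(x^n + 1) with binary s and e for small illustrative parameters (not recommended ones), is:</p>
<pre><code class="language-python">
# Worked sketch of Binary Ring-LWE instance generation: b = a*s + e in the
# ring Z_q[x]/(x^n + 1), with binary secret s and binary noise e.
# n and q are small illustrative parameters, not recommended ones.
import numpy as np

n, q = 8, 257
rng = np.random.default_rng(42)

def ring_mul(f, g):
    # Multiply two degree-below-n polynomials modulo (x^n + 1, q): negacyclic wrap.
    res = np.zeros(n, dtype=np.int64)
    for i in range(n):
        for j in range(n):
            k = i + j
            if k < n:
                res[k] += f[i] * g[j]
            else:
                res[k - n] -= f[i] * g[j]   # x^n = -1
    return res % q

a = rng.integers(0, q, n)          # public, uniformly random
s = rng.integers(0, 2, n)          # secret, binary coefficients
e = rng.integers(0, 2, n)          # noise, binary coefficients
b = (ring_mul(a, s) + e) % q       # public

print("a =", a)
print("b =", b)                    # (a, b) are public; recovering s is the hard problem
</code></pre>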
The polynomials b and a are public and the secret s is hard to find. However, there are Side Channel Attacks that can be applied to retrieve some coefficients (random known bits) of s and e. In this work, we analyze that the secret s can be retrieved successfully having at least 50% of random known bits of s and e.</p>2024-04-29T00:00:00+00:00Copyright (c) 2024 Journal of Internet Services and Applicationshttps://journals-sol.sbc.org.br/index.php/jisa/article/view/3810A Protocol for Solving Certificate Poisoning for the OpenPGP Keyserver Network2024-04-15T13:49:38+00:00Gunnar Wolfgwolf@gwolf.orgJorge Luis Ortega-Arjonajloa@ciencias.unam.mx<p>The OpenPGP encryption standard builds on a transitive trust distribution model for identity assertion, using a non-authenticated, distributed keyserver network for key distribution and discovery. An attack termed “certificate poisoning”, surfaced in 2019 and consisting in adding excessive trust signatures from inexistent actors to the victim key so that it is no longer usable, has endangered the continued operation of said keyserver network. In this article, we explore a protocol modification in the key acceptance and synchronization protocol termed <em>First-party attested third-party certification</em> that, without requiring the redeployment of updated client software, prevents the ill effects of certificate poisoning without breaking compatibility with the OpenPGP installed base. We also discuss some potential challenges and limitations of this approach, providing recommendations for its adoption.</p>2024-05-23T00:00:00+00:00Copyright (c) 2024 Journal of Internet Services and Applicationshttps://journals-sol.sbc.org.br/index.php/jisa/article/view/3812Towards spatiotemporal integration of bus transit with data-driven approaches2024-04-03T23:45:42+00:00Júlio C. Borgesjulio.2018@alunos.utfpr.edu.brAltieris M. Peixotoaltieris.marcelino@gmail.comThiago H. Silvathiagoh@utfpr.edu.brAnelise Munarettoanelise@utfpr.edu.brRicardo Lüdersluders@utfpr.edu.br<p>This study aims to propose an approach for spatiotemporal integration of bus transit, which enables users to change bus lines by paying a single fare. This could increase bus transit efficiency and, consequently, help to make this mode of transportation more attractive. Usually, this strategy is allowed for a few hours in a non-restricted area; thus, certain walking distance areas behave like "virtual terminals". For that, two data-driven algorithms are proposed in this work. First, a new algorithm for detecting itineraries based on bus GPS data and the bus stop location. The proposed algorithm's results show that 90% of the database detected valid itineraries by excluding invalid markings and adding times at missing bus stops through temporal interpolation. Second, this study proposes a bus stop clustering algorithm to define suitable areas for these virtual terminals where it would be possible to make bus transfers outside the physical terminals. Using real-world origin-destination trips, the bus network, including clusters, can reduce traveled distances by up to 50%, at the expense of making twice as many connections on average.</p>2024-06-05T00:00:00+00:00Copyright (c) 2024 Journal of Internet Services and Applicationshttps://journals-sol.sbc.org.br/index.php/jisa/article/view/3980Data quantitative and qualitative study in Brazilian Open Data Portals2024-04-15T11:54:36+00:00Shirlei L. O. do Carmoluolivver@hotmail.comClaudio F. R. Geyergeyer@inf.ufrgs.brJulio C. S. 
dos Anjosjcsanjos@ufc.br<p>Open data refers to data shared openly with anyone; in addition to being accessed, such data can be manipulated and redistributed. The optimized and interchangeable use of open data can lead to so-called open innovation, understood as the exchange of information between different organizations to generate more complete and innovative systems and solutions. Despite the clear benefit to society, different studies highlight major challenges to its implementation, such as the lack of promotion of open data, the lack of standardization in data availability, and the lack of complete and up-to-date information, among others. This study uses an available, reproducible methodology to show, across different dimensions, the open data panorama in Brazil. The results indicate that there are many opportunities for improvement in categories such as standardization of data exposure and licensing, update rate, and, given the absence of some data, the promotion of open data itself.</p>2024-06-25T00:00:00+00:00Copyright (c) 2024 Journal of Internet Services and Applicationshttps://journals-sol.sbc.org.br/index.php/jisa/article/view/4010Fine Tuning of the BitCover Algorithm for Interactive VoD Streaming over 5G Cellular Networks2024-03-17T23:33:16+00:00Carlo Kleber da S. Rodriguescarlo.kleber@ufabc.edu.brVladimir Rochavladimir.rocha@ufabc.edu.br<p>The BitCover algorithm is an adaptation of the popular BitTorrent algorithm for interactive Video-on-Demand (VoD) streaming over 5G cellular networks. This algorithm has already proven to be an effective solution for ensuring adequate bandwidth utilization within each cell site, which is essential for unlocking the full potential of 5G technology. Despite its attractive performance, we nevertheless wonder whether there is still room for optimization, since four of its configuration parameters are identical to those of the original BitTorrent and are set to the same numerical values. These parameters are <em>δ<sub>t</sub></em> (unchoking time), <em>N<sub>p</sub></em> (number of neighbors a peer has), <em>y</em> (number of data slots a peer’s upload capacity is divided into), and <em>z</em> (number of peers randomly selected in optimistic unchoking). To tackle this issue, we carry out simulation experiments to determine a more adequate configuration for the first three parameters, leaving the specific analysis of the last parameter (i.e., <em>z</em>) implicit since it is directly related to the third (i.e., <em>y</em>). Among the major findings, we highlight that BitCover’s original performance is improved by about 16.7% in terms of download rate and by 50.1% in terms of discontinuity time. To complement this study, we also present a detailed competitive analysis against two other recent proposals from the literature, mainly to show the overall effectiveness of the optimized version of the BitCover algorithm. Within this context, our pivotal contribution is to offer helpful insights for designing protocols aimed at 5G cellular networks. Finally, the paper closes with general conclusions and outlines future directions.</p>2024-06-28T00:00:00+00:00Copyright (c) 2024 Journal of Internet Services and Applicationshttps://journals-sol.sbc.org.br/index.php/jisa/article/view/3781A Privacy-Preserving Contact Tracing System based on a Publish-Subscribe Model2024-05-15T17:36:42+00:00Mikaella F. da Silvamikaellaferreira0@gmail.comBruno P.
Santosbruno.ps@ufba.brPaulo H. L. Rettorepaulo.rettore.lopes@fkie.fraunhofer.deVinícius F. S. Motavinicius.mota@inf.ufes.br<p>In the context of the COVID-19 pandemic, using contact-tracing apps together with measures such as social isolation and mask-wearing has emerged as an efficient strategy to mitigate the spread of the virus. Nonetheless, these apps have raised privacy concerns. This paper introduces a technique for enhancing privacy in contact-tracing systems while preserving the data for research purposes. The contact-tracing system employs a unique identifier signed with a key associated with the application and the user. In this system, mobile devices serve as sensors that send beacons, actively detect nearby devices, and transmit the identifiers of surrounding contacts to a cloud-based platform. When a user reports a positive COVID-19 diagnosis, a dedicated web service identifies and tracks the identifiers associated with at-risk contacts. The system uses a topic-based publish-subscribe broker, in which each identifier represents an individual topic, to abstract contact communication and disseminate alert messages. To assess the system's efficacy, we conducted a use case with twenty volunteers using the mobile application for two weeks, representing a small university campus. The quantitative results of the use case demonstrated the system's capability to analyze potential virus transmission and observe users' social interactions while maintaining their anonymity.</p>2024-08-11T00:00:00+00:00Copyright (c) 2024 Journal of Internet Services and Applicationshttps://journals-sol.sbc.org.br/index.php/jisa/article/view/3835Design and implementation of intelligent packet filtering in IoT microcontroller-based devices2024-04-03T23:26:57+00:00Gustavo de Carvalho Bertolibertoli@ita.brGabriel Victor C. Fernandesgrabriel_victor@usp.brPedro H. Borges Monicipredroh.monici@usp.brCésar H. de Araujo Guibocesarguibo@usp.brAldri Luiz dos Santosaldri@dcc.ufmg.brLourenço Alvez Pereira Júniorljr@ita.br<p>Internet of Things (IoT) devices are increasingly pervasive and essential in enabling new applications and services. However, their widespread use also exposes them to exploitable vulnerabilities and flaws that can lead to significant losses. In this context, ensuring robust cybersecurity measures is essential to protect IoT devices from malicious attacks. Nevertheless, solutions that provide flexible policy specifications and higher security levels for IoT devices are scarce. To address this gap, we introduce T800, a low-resource packet filter that uses machine learning (ML) algorithms to classify packets on IoT devices. We present a detailed performance benchmarking framework and demonstrate T800's effectiveness on the ESP32 system-on-chip microcontroller and the ESP-IDF framework. Our evaluation shows that T800 is an efficient solution that increases device computational capacity by excluding unsolicited malicious traffic from the processing pipeline. Additionally, T800 is adaptable to different systems and provides a well-documented performance evaluation strategy for ML-based security mechanisms on ESP32-based IoT systems.
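As a toy illustration of this kind of ML-based packet filtering (the features, training data, and model below are illustrative assumptions, not the ones used by T800, which targets a C/ESP-IDF environment rather than Python), a classifier trained offline can serve as an accept/drop decision for each incoming packet:

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical per-packet features: [length, destination port, TCP flags, inter-arrival time (ms)]
X_train = [
    [60,    23, 0x02,  1],   # SYN probe to telnet -> malicious
    [1500, 443, 0x18, 35],   # HTTPS data segment  -> benign
    [60,    22, 0x02,  1],   # SYN probe to SSH    -> malicious
    [120, 1883, 0x18, 90],   # MQTT publish        -> benign
]
y_train = [1, 0, 1, 0]       # 1 = drop, 0 = accept

clf = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

def filter_packet(features):
    """Return True if the packet should be dropped before further processing."""
    return clf.predict([features])[0] == 1

print(filter_packet([60, 23, 0x02, 2]))   # dropped: resembles the SYN probes above
```

On a microcontroller the trained model would be exported to fixed C code; scikit-learn is used here only to keep the sketch short.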
Our research contributes to improving the cybersecurity of resource-constrained IoT devices and provides a scalable, efficient solution that can be used to enhance the security of IoT systems.</p>2024-08-27T00:00:00+00:00Copyright (c) 2024 Journal of Internet Services and Applicationshttps://journals-sol.sbc.org.br/index.php/jisa/article/view/3799Combining Regular Expressions and Machine Learning for SQL Injection Detection in Urban Computing2024-04-03T23:28:31+00:00Michael S. Souzamichael.souza@aluno.uece.brSilvio E. S. B. Ribeirosilvio.ribeiro@aluno.uece.brVanessa C. Limavane.carvalho@aluno.uece.brFrancisco J. Cardosofco.cardoso@aluno.uece.brRafael L. Gomesrafaellgom@gmail.com<p>Given the vast amount of data generated in urban environments and the rapid advancements in information technology, several online urban services have emerged in recent years. These services employ relational databases to store the collected data, thereby making them vulnerable to potential threats, including SQL Injection (SQLi) attacks. Hence, there is a demand for security solutions that improve detection efficiency and satisfy the response time and scalability requirements of this detection process. Based on this demand, this article proposes an SQLi detection solution that combines Regular Expressions (RegEx) and Machine Learning (ML), called the Two Layer approach of SQLi Detection (2LD-SQLi). The RegEx acts as a first filtering layer against SQLi inputs, improving the response time of 2LD-SQLi; input that passes this filter is then analyzed by an ML model to detect SQLi, increasing accuracy. Experiments using a real dataset suggest that 2LD-SQLi is suitable for detecting SQLi while meeting efficiency and scalability requirements.</p>2024-07-02T00:00:00+00:00Copyright (c) 2024 Journal of Internet Services and Applicationshttps://journals-sol.sbc.org.br/index.php/jisa/article/view/3833POSITRON: Efficient Allocation of Smart City Multifunctional IoT Devices Aware of Computing Resources2024-04-22T10:53:45+00:00Leandro H. B. da Silvahenrique.leandro@academico.ifpb.edu.brJefferson L. F. da Silvaferreira.jefferson@academico.ifpb.edu.brRicardo Pereira Linsricardo.lins@academico.ifpb.edu.brFernando Menezes Matosfernando@ci.ufpb.brAldri Luiz dos Santosaldri@dcc.ufmg.brPaulo Ditarso Maciel Jr.paulo.maciel@ifpb.edu.br<p>Many IoT scenarios demand continuous capture of information from multifunctional sensors and smart units, as well as sending those data to cloud centers. However, allocating tasks to these sensors is not straightforward due to the urgency and priority that each type of data collection requires depending on the needs of the urban environment. This paper presents the POSITRON scheme for managing sensing allocation in a multifunctional IoT network based on previously defined policies. The policies take into account the characteristics of the applications running on the network and the different specifications of the available devices. We implemented POSITRON in a network simulator to analyze its efficiency in allocating network resources.
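As a simplified, hypothetical sketch of policy-aware allocation in the spirit of POSITRON (the priority ordering, resource fields, and greedy rule below are assumptions made for illustration, not the policies defined in the paper):

```python
def allocate(tasks, devices):
    """Greedily assign sensing tasks to multifunctional devices.

    tasks:   dicts with 'name', 'priority' (higher = more urgent) and 'cpu'/'mem' demands.
    devices: dicts with 'name' and remaining 'cpu'/'mem' capacity.
    """
    assignment = {}
    for task in sorted(tasks, key=lambda t: t["priority"], reverse=True):
        for dev in devices:
            if dev["cpu"] >= task["cpu"] and dev["mem"] >= task["mem"]:
                dev["cpu"] -= task["cpu"]      # reserve the device's resources
                dev["mem"] -= task["mem"]
                assignment[task["name"]] = dev["name"]
                break
    return assignment

tasks = [
    {"name": "air_quality", "priority": 2, "cpu": 10, "mem": 8},
    {"name": "noise",       "priority": 1, "cpu": 5,  "mem": 4},
    {"name": "traffic_cam", "priority": 3, "cpu": 30, "mem": 32},
]
devices = [
    {"name": "node_a", "cpu": 40, "mem": 32},
    {"name": "node_b", "cpu": 20, "mem": 16},
]
print(allocate(tasks, devices))   # higher-priority sensing tasks are placed first
```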
The results show that considering the requirements demanded by applications and the distinct characteristics of multifunctional IoT devices brings benefits to resource allocation.</p>2024-07-02T00:00:00+00:00Copyright (c) 2024 Journal of Internet Services and Applicationshttps://journals-sol.sbc.org.br/index.php/jisa/article/view/3874CANCEL: A feature engineering method for churn prediction in a privacy-preserving context2024-07-03T18:16:05+00:00Gabriel T. Coimbragabriel.coimbra@ufv.brVictor Hugo R. Santosvictor.h.santos@ufv.brPedro A. Maiapedro.maia@ufv.brLetícia O. Silvaleticia.silva1@ufv.brRayanne P. Souzarayanne.souza@ufv.brFabrício A. Silvafabricio.asilva@ufv.brThais R. M. Braga Silvathais.braga@ufv.br<div class="page" title="Page 1"> <div class="layoutArea"> <div class="column"> <p>This paper proposes a solution for predicting churn with privacy preservation by using edge computing. With the increasing popularity of smartphones, users are becoming more demanding regarding mobile app usage. Installing and removing apps are frequent routines, and the ease of uninstallation can facilitate churn, that is, customer abandonment. Companies seek to minimize churn since the cost of acquiring new customers is much higher than that of retaining current ones. To predict possible abandonment, organizations are increasingly adopting artificial intelligence (AI) techniques. Nevertheless, customers are becoming more concerned about their data privacy. In this context, we propose a technique called CANCEL, which creates attributes based on users' temporal behavior and uses edge computing to predict churn locally, without transmitting users' data. The paper presents the evaluation of CANCEL against baseline solutions and the development of a mobile app that integrates the proposed method and is deployed as an edge computing solution.</p> </div> </div> </div>2024-10-04T00:00:00+00:00Copyright (c) 2024 Journal of Internet Services and Applications
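As a minimal sketch of the kind of on-device temporal feature engineering that a CANCEL-style approach relies on (the feature names and the threshold rule below are illustrative assumptions, not the attributes defined by CANCEL):

```python
from datetime import datetime, timedelta

def temporal_features(session_timestamps, now):
    """Summarize a user's recent app usage locally, without sending raw data off-device."""
    last_7d = [t for t in session_timestamps if now - t <= timedelta(days=7)]
    gap_days = (now - max(session_timestamps)).days if session_timestamps else float("inf")
    return {
        "sessions_last_7d": len(last_7d),
        "days_since_last_use": gap_days,
    }

def churn_risk(features):
    """Placeholder local rule; a real deployment would run a trained model on the edge device."""
    return features["days_since_last_use"] > 5 or features["sessions_last_7d"] == 0

now = datetime(2024, 7, 1)
sessions = [datetime(2024, 6, 20), datetime(2024, 6, 22)]
print(churn_risk(temporal_features(sessions, now)))   # True: nine days without use
```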