Computer Standards & Interfaces 97 (2026) 104106

Contents lists available at ScienceDirect
Computer Standards & Interfaces
journal homepage: www.elsevier.com/locate/csi

A novel hybrid WOA–GWO algorithm for multi-objective optimization of energy efficiency and reliability in heterogeneous computing

Karishma∗, Harendra Kumar
Department of Mathematics and Statistics, Gurukula Kangri (Deemed to be University), Haridwar, Uttarakhand 249404, India

ARTICLE INFO

Keywords:
Energy-efficient scheduling
Heterogeneous computing
Hybrid WOA–GWO
Metaheuristics
Reliability optimization
Sensitivity analysis

ABSTRACT

Heterogeneous computing systems are widely adopted for their capacity to optimize performance and energy efficiency across diverse computational environments. However, most existing task scheduling techniques address either energy reduction or reliability enhancement, rarely achieving both simultaneously. This study proposes a novel hybrid whale optimization algorithm–grey wolf optimizer (WOA–GWO) integrated with dynamic voltage and frequency scaling (DVFS) and an insert-reversed block operation to overcome this dual challenge. The proposed hybrid WOA–GWO (HWWO) framework enhances task prioritization using the dynamic variant rank heterogeneous earliest-finish-time (DVR-HEFT) approach to ensure efficient processor allocation and reduced computation time. The algorithm's performance was evaluated on real-world constrained optimization problems from CEC 2020, as well as Fast Fourier Transform (FFT) and Gaussian Elimination (GE) applications. Experimental results demonstrate that HWWO achieves substantial gains in both energy efficiency and reliability, reducing total energy consumption by 55% (from 170.52 to 75.67 units) while increasing system reliability from 0.8804 to 0.9785 compared to state-of-the-art methods such as SASS, EnMODE, sCMAgES, and COLSHADE.
The experimental results, obtained across varying task and processor counts, further demonstrate that the proposed algorithmic approach outperforms existing state-of-the-art and metaheuristic algorithms by delivering superior energy efficiency, maximizing reliability, minimizing computation time, reducing the schedule length ratio (SLR), optimizing the communication-to-computation ratio (CCR), enhancing resource utilization, and minimizing sensitivity. These findings confirm that the proposed model effectively bridges the existing research gap by providing a robust, energy-aware, and reliability-optimized scheduling framework for heterogeneous computing environments.

∗ Corresponding author.
E-mail addresses: maths.karishma97@gmail.com (Karishma), balyan.kumar@gmail.com (H. Kumar).
https://doi.org/10.1016/j.csi.2025.104106
Received 14 February 2025; Received in revised form 14 November 2025; Accepted 28 November 2025; Available online 7 December 2025
0920-5489/© 2025 Elsevier B.V. All rights are reserved, including those for text and data mining, AI training, and similar technologies.

1. Introduction

1.1. Motivation

In recent years, the exponential growth in data volume and computational demands has propelled the development of heterogeneous computing systems. Numerous computing resources are required for various heterogeneous computing models, such as utility computing, peer-to-peer, and grid computing. These resources can be allocated through the network to meet the needs of high-performing tasks [1]. Resource scheduling is a fundamental challenge in heterogeneous computing, especially as the number of tasks and resources increases. Inefficient task allocation can lead to processor overutilization or underutilization, complicating the scheduling process [2]. Heterogeneous distributed computing has proven highly effective in handling diverse and complex end-user tasks, driven by advancements in network technologies and infrastructure [2,3]. However, as multiprocessor task scheduling is an NP-hard optimization problem, delivering a valid solution within a predefined deadline remains a significant challenge for real-time applications in heterogeneous systems [4]. Furthermore, the delicate balance between performance and power consumption stands as a pivotal factor in the design of multiprocessor systems [5]. To attain optimal performance, it is imperative to implement efficient scheduling of applications across the diverse resources within heterogeneous computing systems, complemented by efficient runtime support mechanisms [6]. In multiprocessor systems, task scheduling involves arranging the sequence of tasks and facilitating their execution across selected processors to achieve a predetermined goal, such as meeting deadlines, minimizing overall execution time (makespan), conserving energy, and enhancing system reliability, among other objectives [7].

Efficiently managing energy consumption is pivotal in the design of heterogeneous distributed systems. This is essential because the dissipation of energy directly influences not only the development and operation of the system but also profoundly impacts the individuals within the living environment [8]. The rise in energy consumption has emerged as a significant concern, with a direct impact on the costs associated with computing services. This consumption typically comprises dynamic energy resulting from switching activities and static energy arising from leakage currents [9]. Recognizing the importance of energy conservation, researchers have explored and developed several techniques to address this issue, including memory optimization, DVFS, and resource hibernation. DVFS, also known as dynamic speed scaling (DSS), dynamic power scaling (DPS), and dynamic frequency scaling (DFS), is particularly noteworthy for its potential to save energy [10,11]. This technique facilitates energy-efficient scheduling by dynamically adjusting the supply voltage and frequency of a processor while tasks are running, thereby optimizing energy usage [12–14]. The implementation of dynamic voltage scaling for energy-efficient optimization presents a noteworthy advancement. Nevertheless, it is essential to acknowledge a potential drawback: an elevated risk of transient failures in processors, which could undermine the reliability of systems [9,15]. Reliability pertains to the probability that a schedule will successfully complete its execution within the defined parameters [16,17]. Higher frequencies typically correspond to both high energy consumption and enhanced reliability, whereas lower frequencies are associated with decreased energy consumption and reduced reliability [18]. When an application meets its designated reliability objective – referred to as a reliability goal, requirement, assurance, or constraint in various studies – it is deemed reliable in accordance with functional safety standards. These standards include DO-178B for avionics systems, ISO 26262 for automotive systems, and IEC 61508 for a broad spectrum of industrial software systems [8,19].

1.2. Our contributions

Task scheduling, an NP-hard problem, increases the complexity of voltage adjustment choices in heterogeneous computing systems [20,21]. Balancing energy efficiency and reliability presents a major challenge, as prioritizing one often complicates optimizing the other [18]. Scheduling algorithms are broadly classified into heuristics and metaheuristics. Heuristic methods, which use greedy strategies for optimal selection [22], are computationally efficient but often fail to perform well in complex or large-scale scheduling problems [21,23]. In contrast, metaheuristic algorithms, inspired by natural processes, offer more reliable results and greater flexibility [24]. Metaheuristics are popular due to their simplicity, adaptability, independence from derivative-based methods, and ability to avoid local optima. The authors of [25] developed an optimized gravitational search algorithm (GSA) to enhance feature-level fusion in multimodal biometric systems; their work demonstrated how metaheuristic optimization can effectively improve system performance through better parameter tuning and search-space exploration. The authors of [26] developed a hybrid white shark optimizer–support vector machine (WSO–SVM) model for gender classification from video data, where the white shark optimizer was used to fine-tune SVM parameters, leading to improved accuracy and faster processing compared to traditional SVM methods. In [27], the authors developed two novel task scheduling models based on the metaheuristic GWO technique to optimize energy consumption while minimizing computational time for parallel applications. In this article, a novel hybridization (HWWO) of the WOA [28] and GWO [29] is employed to tackle the task scheduling problem. Hybrid algorithms are designed to integrate the features of various metaheuristic approaches, exploiting their synergy to address complex optimization challenges. This fusion not only enhances the efficiency and flexibility of the algorithms but also augments their overall performance, often surpassing that of traditional metaheuristic algorithms. Furthermore, a wide range of such hybrid algorithms has been developed, driving the evolution of new-generation metaheuristics that effectively balance exploration and exploitation. The proposed HWWO leverages this approach to achieve superior scheduling performance by harnessing the complementary strengths of WOA and GWO while mitigating their individual drawbacks.

The WOA demonstrates superior exploration capabilities through its advanced updating mechanism, employing a randomized search approach to dynamically shift positions and navigate towards optimal solutions. As highlighted by [30], WOA strikes a good balance between exploration and exploitation, exhibiting notable convergence speed in solving optimization problems. However, despite its effectiveness relative to traditional algorithms, WOA can struggle to escape local optima due to its encircling mechanism [31] and may fail to effectively refine the best solutions. Conversely, GWO excels in exploitation through strong local search capabilities but suffers from limited diversity in the early stages, which can hinder global search. To address these limitations, we propose a hybrid approach that augments WOA with mutation operators and integrates it with GWO to enhance overall scheduling performance. Recognizing that excessive mutation can disrupt previously discovered good solutions and impede convergence, we have incorporated an insert-reversed block operation after mutation to preserve solution quality and improve the algorithm's efficiency.

This study endeavors to attain an optimized energy-efficient scheduling algorithm with maximal system reliability, thus minimizing the aggregate energy consumption of precedence-constrained tasks in parallel applications executed on heterogeneous computing systems. The primary contributions of this research are succinctly outlined as follows:

• This article proposes an innovative hybrid algorithmic approach that combines the WOA and GWO to tackle the intricate problems related to energy-efficient task scheduling.
• This study meticulously designs energy-efficient scheduling algorithms that leverage the hybrid WOA–GWO to effectively optimize two key objectives: energy consumption and system reliability. The algorithms ensure compliance with the deadline constraints of parallel applications.
• The proposed algorithm assigns tasks to suitable processors by synergistically integrating the HWWO technique with DVFS technology. The algorithm applies the mutation operator in conjunction with the insert-reversed block operation as part of the HWWO technique, which helps to mitigate static energy consumption. Additionally, the DVFS technique is utilized to mitigate dynamic energy consumption.
• Comprehensive experimental evaluations are carried out by comparing the proposed HWWO algorithm's performance against several well-known benchmarks, including the FFT and GE applications and four algorithms from the 'CEC2020 Competition' — SASS, EnMODE, sCMAgES, and COLSHADE. Furthermore, the algorithm is tested on a set of unimodal benchmark test functions.
• The experimental results demonstrate that the proposed algorithmic approach outperforms existing state-of-the-art and metaheuristic algorithms by delivering superior energy efficiency, maximizing reliability, minimizing computation time for task assignment, minimizing sensitivity, reducing SLR and CCR, and enhancing resource utilization. These promising results hold true across various scale conditions and deadline constraints.
• The proposed technique's computational complexity during execution is evaluated, and the Wilcoxon signed-rank statistical test is employed to validate its performance.

The structure of the article is outlined as follows. Section 2 offers an extensive review of relevant literature and related works. Detailed explanations of the WOA and GWO techniques are covered in Section 3. In Section 4, pertinent models and problem formulations are explored, along with essential notations used throughout the study. Section 5 provides a thorough description of the proposed model. The simulation experiments developed to evaluate the proposed model are detailed in Section 6. Finally, Section 7 elaborates on the study's conclusion, discussing limitations and directions for future research.

2. Literature review

The substantial energy consumption associated with computing systems poses a significant impediment to their rapid advancement. Therefore, minimizing energy usage while ensuring system reliability has become a pressing concern for fostering sustainable computing methodologies. Researchers from multiple fields have devoted substantial efforts to investigating the intricate challenges associated with task scheduling techniques that strike a balance between minimizing energy consumption and maximizing system reliability.

Among metaheuristic techniques, swarm intelligence (SI) methods form a prominent category, drawing on the collective behavior of living organisms. These algorithms utilize population-based, stochastic, and iterative strategies. Numerous real-world challenges, including drone deployment, image processing, wireless sensor network localization, machine learning optimization, and others, have found effective solutions by employing SI techniques [2,50]. Beyond their diverse applications, SI algorithms have undergone continuous enhancements through modifications, hybridizations with other techniques [51], and parallel computing implementations [52].
These efforts aim to further optimize their performance and obtain superior solutions across diverse problem domains. SI algorithms are inspired by the collaborative behaviors observed in various animal and insect communities, where entities interact and respond to their environment collectively; examples include animal herds, ant colonies, fish schools, bacterial aggregations, and bird flocks. These algorithms exhibit notable advantages, including adaptability, user-friendliness, and reliability [53]. Recently developed SI algorithms include the artificial bee colony (ABC) [54], ant colony optimization (ACO) [55], the cuckoo optimization algorithm (COA) [56], particle swarm optimization (PSO) [57], the horse herd optimization algorithm [58], krill herd (KH) [59], the crow search algorithm (CSA) [60], GWO [29], the sailfish optimizer (SFO) [61], and WOA [28].

In the realm of sustainable computing systems, the DVFS technique has emerged as a prominent and widely employed method for efficiently curtailing energy consumption [32]. A study by [33] addressed energy-aware task assignment for deadline-constrained workflow applications in heterogeneous computing environments. Earlier, [34] explored theoretical models for DVFS and proposed an energy-aware scheduling strategy for single-processor platforms. Advanced algorithms like enhanced energy-efficient scheduling (EES) [35] and DVFS-enabled energy-efficient workflow task scheduling (DEWTS) [36] were later developed to reduce energy consumption in parallel applications. EES uses DVFS to slack non-critical tasks while meeting time constraints, while DEWTS enhances energy efficiency by selectively turning off processors to minimize static energy consumption. In [37], the authors proposed the downward energy consumption minimization (DECM) algorithm, which innovatively transfers application deadlines to task-level deadlines using deadline-slack and task-level concepts, enabling low-complexity energy minimization. The authors of [38] proposed a two-stage solution to enhance the reliability of automotive applications while satisfying energy and response time constraints: the first stage solved response time reduction under energy constraint (RREC) via average energy pre-allocation, while the second stage enhanced reliability within the remaining energy–time budgets from the RREC stage. Addressing the challenge of energy-efficient task scheduling in cloud environments, the authors of [39] proposed an algorithm based on DVFS that prioritizes tasks by deadline, categorizes physical machines, and assigns tasks to nearby machines in the same priority class. Researchers introduced the energy makespan multi-objective optimization algorithm for energy-efficient, low-latency workflow scheduling across fog–cloud resources [40]. The research work of [41,42] aimed to devise an approach that could reduce the overall execution time for parallel applications running on high-performance distributed computing environments, while concurrently enforcing adherence to predetermined energy consumption thresholds.

Reliability-aware design algorithms aimed at ensuring reliability typically try to reduce certain objectives while also satisfying reliability requirements. Improving the reliability of parallel applications frequently results in longer schedules or higher energy usage. Optimizing both schedule length (or energy consumption) and reliability concurrently poses a classic bi-criteria optimization problem requiring the identification of Pareto-optimal solutions [43,44]. The researchers in [45] introduced an approach to reliably assign and schedule tasks on heterogeneous multiprocessor systems, tackling the complexities and potential failures associated with critical applications. Researchers in [46,47] tackled the intricate problem of workflow scheduling on heterogeneous computing systems, aiming to achieve high reliability while minimizing the unnecessary duplication of resources. Energy consumption and reliability are closely intertwined concepts. In [48], researchers explored this relationship and developed a model that linked energy consumption to reliability levels; their work aimed to maximize the reliability of parallel applications executed on uniprocessor systems while adhering to strict deadlines and energy consumption constraints. The authors in [49] proposed power management schemes for homogeneous multiprocessors that targeted energy savings while upholding specified system reliability levels.

Metaheuristic techniques are versatile methods inspired by natural phenomena, such as evolutionary adaptation, biological swarms, and physical systems. This study implements the WOA algorithm to address energy-efficient and reliable task scheduling in heterogeneous computing environments. The approach enhances WOA by hybridizing it with GWO, leveraging the strengths of both techniques. WOA, introduced by Mirjalili and Lewis in 2016 [28], is a notable method in swarm intelligence optimization. Inspired by the remarkable hunting strategies employed by humpback whales in the vast ocean, this algorithm demonstrated competitive or superior performance compared to several existing optimization methods [28]. Similar to WOA, GWO is a nature-inspired metaheuristic optimization algorithm based on the social behavior and hunting strategies of grey wolves [29]; it is widely used for solving complex optimization problems. WOA, recognized for its unique approach, has been effectively applied to scheduling tasks in cloud computing, aiming to enhance system performance within constrained computing capacities [62]. The researchers of [63] proposed an innovative scheduling approach based on WOA that combined multi-objective optimization with trust awareness; it mapped tasks to virtual resources based on priorities, evaluated trust via SLA parameters, and enforced deadline constraints for task execution on VMs. The study in [64] tackled the challenge of scheduling tasks on heterogeneous multiprocessor systems equipped with DVFS capabilities, with the objective of optimizing energy consumption while adhering to constraints related to makespan and system reliability; to achieve this, the authors proposed an enhanced variant of the WOA incorporating opposition-based learning and an individual selection strategy. In the article [65], the authors introduced an improved whale algorithm (IWA) to optimize task allocation in multiprocessing systems (MPS), minimizing energy consumption and makespan; they utilized DVFS and addressed task scheduling's NP-hard nature. The article [66] addressed power consumption in cloud infrastructure, underscoring the necessity for energy-efficient algorithms and load balancing techniques. The authors employed various optimization algorithms, such as PSO, COA, and WOA, to achieve efficient resource scheduling and mitigate energy consumption.

The researchers of [67] proposed an innovative task scheduling algorithm leveraging the GWO technique for cloud computing environments. This GWO-based task scheduling (GWOTS) approach aimed to minimize execution costs, reduce energy consumption, and shorten the overall makespan. Researchers developed an advanced multi-objective optimization technique inspired by the GWO to address the growing computational demands on cloud data centers [68]. Their primary goals were to maximize the efficient utilization of cloud resources, minimize energy consumption, and reduce the overall execution time, while ensuring the requested services were delivered effectively. In [69], the authors proposed a novel hybrid model that combined PSO and GWO for workflow scheduling in cloud computing environments; this integrated approach aimed to enhance overall performance by optimizing total execution costs and reducing the time required for task completion. The article [70] addressed the challenges of task allocation and quality of service (QoS) optimization in cloud–fog computing environments. The authors proposed a multi-objective GWO algorithm, implemented within the fog broker system, to minimize delay and energy consumption. In article [71], the authors addressed task scheduling challenges in cloud computing by proposing a GWO-based algorithm; the approach aimed to efficiently allocate resources and minimize task completion times.

WOA employs simple yet powerful search mechanisms to efficiently identify optimal solutions. However, like other SI algorithms, WOA can face challenges such as getting trapped in local optima and maintaining population diversity. To address these limitations, numerous WOA variants have been proposed, enhancing the core algorithm through modifications or hybridization. These improved versions have been successfully applied to a variety of optimization problems, including task scheduling in distributed computing environments.

Table 1 provides a concise summary of recent studies that utilize WOA-based metaheuristic techniques for task scheduling in distributed systems. The table categorizes these approaches based on key performance parameters, such as energy consumption, reliability, $C_{TTA}$, sensitivity, makespan, and resource utilization. These parameters are chosen for their relevance in evaluating the proposed HWWO method in this article. As evident from Table 1, while many studies consider these parameters individually or in limited combinations, none evaluate them as comprehensively as we have done in this work. This underscores the uniqueness of our approach and its potential to provide a more well-rounded assessment of task scheduling optimization.

Table 1
Performance parameters of various WOA-based metaheuristic task scheduling techniques (considered parameters: energy, reliability, $C_{TTA}$, sensitivity, makespan, resource utilization).

S. No.  Reference        Core technique   Environment
1       [72]             VWOA             Heterogeneous
2       [73]             WOA              Heterogeneous
3       [74]             M-WODE           Heterogeneous
4       [75]             WOA              Heterogeneous
5       [76]             h-DEWOA          Heterogeneous
6       [77]             IWC              Homogeneous
7       [24]             HGWWO            Heterogeneous
8       [78]             HWOA-based MBA   Homogeneous
9       [79]             HPSWOA           Heterogeneous
10      [80]             CWOA             Heterogeneous
11      [81]             HWOA             Heterogeneous
12      [82]             WHOA             Heterogeneous
13      [83]             ANN-WOA          Homogeneous
14      [84]             WOA              Heterogeneous
15      Proposed model   HWWO             Heterogeneous

3. Preliminaries

The following discussion aims to provide a succinct yet comprehensive understanding of the core concepts driving the WOA and GWO metaheuristics. Additionally, it elucidates the distinctive features of these algorithms and their versatile applicability across a wide spectrum of problem spaces.

3.1. Whale optimization algorithm

Whales are majestic creatures that captivate the imagination. Among the animal kingdom, they hold the distinction of being the largest mammals on earth: an adult whale can reach staggering proportions, growing up to 30 m in length and weighing up to 180 tons. These giants of the sea are classified into various species, including the killer whale, the sei, the finback, the humpback, the minke, and the awe-inspiring blue whale. Beyond their sheer size, whales are remarkable for their intelligence and emotional depth, exhibiting complex social behaviors; they are often observed traveling and living in close-knit groups. One of the most fascinating species is the humpback whale, renowned for its intricate hunting technique known as bubble-net feeding [85]. When it comes to hunting, humpback whales exhibit a remarkable preference for preying on schools of krill or small fish that congregate near the ocean's surface. Their intricate hunting strategy involves diving to depths of around 12 m and then employing a fascinating maneuver: the whales release a spiral of bubbles, carefully encircling their prey, and then gracefully swim upwards towards the surface, trapping their quarry within the bubble net. It is noteworthy that this bubble-net feeding technique has been observed exclusively in humpback whales. The exceptional hunting prowess of these whales has served as a source of inspiration for the development of the swarm intelligence algorithm known as the WOA [28]. Proposed for solving continuous optimization problems, the WOA aims to mimic the humpback whales' remarkable hunting strategies. The WOA represents each potential solution as a whale searching for the optimal position, guided by the best solution found. It uses two mechanisms: encircling the prey (exploring promising areas) and creating bubble nets (exploiting by trapping targets). As depicted in Fig. 1, humpback whales exhibit a remarkable coordinated feeding strategy involving the creation of bubble nets to trap and capture their prey. The exploration phase searches for potential solution regions, while exploitation focuses on the most viable solutions within those areas, balancing exploration and exploitation for efficient optimization.

3.1.1. A mathematical-based model

This section models the whale behaviors of encircling targets, prey searching, and spiral bubble-net feeding maneuvers mathematically.

3.1.1.1. Encircling prey. The WOA takes inspiration from the hunting behavior of humpback whales, which can effectively locate and encircle prey. Since the optimal solution's position is not known initially, the algorithm assumes the current best solution is near the global optimum. The other candidate solutions then attempt to update their positions towards this best solution identified so far. This encircling and
Kumar Computer Standards & Interfaces 97 (2026) 104106 Fig. 1. Coordinated feeding strategy employed by humpback whales involving bubble nets. localization process is mathematically modeled through Eqs. (1) and 𝜈 signifies a randomly generated scalar quantity constrained (2). within the bounds of [−1, 1]. → → → → → → → 𝐷 = | 𝜉 ∗ 𝑍 ∗ (𝑟) − 𝑍(𝑟)| (1) 𝐷′ = |𝑍 ∗ (𝑟) − 𝑍(𝑟)| (6) → → → → → → → ∗ 𝑍(𝑟 + 1) = |𝑍 (𝑟) − 𝜁 ∗ 𝐷| (2) 𝑍(𝑟 + 1) = 𝐷 ∗ 𝑒 ′ 𝑏𝜈 ∗ ∗ 𝑐𝑜𝑠(2𝜋𝜈) + 𝑍 (𝑟) (7) The Eqs. (1) and (2) involve several variables and vectors. The variable → → 𝑟 denotes the current iteration number. 𝜁 and 𝜉 represent coefficient The humpback whale circles its prey in a tightening spiral pattern while → hunting. The WOA models this by randomly choosing between the vector quantities. The vector 𝑍 ∗ representing the current best solution shrinking encirclement or spiral model, each with 50% probability, to must be updated during any iteration where a superior solution is → update whale positions during optimization. This stochastic positional identified. The vector 𝑍 indicates another position vector being consid- update process is given by Eq. (8), wherein 𝑝 denotes a randomly ered. The absolute value operation is represented by the ∥ symbol. The → → generated scalar confined within the range [0,1]. calculation of the coefficient vectors 𝜁 and 𝜉 proceeds in the following manner [85]: ⎧ → → → → ⎪ 𝑍 ∗ (𝑟) − 𝜁 ∗ 𝐷 if 𝑝 < 0.5 → → → → 𝑍(𝑟 + 1) = ⎨ → (8) 𝜁 = 2 ∗ 𝜇 ∗ 𝑙1 − 𝜇 (3) ⎪ 𝐷′ ∗ 𝑒𝑏𝜈 ∗ 𝑐𝑜𝑠(2𝜋𝜈) + 𝑍 ∗ (𝑟) if 𝑝 ≥ 0.5 → → ⎩ 𝜉 = 2 ∗ 𝑙2 (4) 3.1.1.3. Exploration phase (searching for prey strategy). This strategy → facilitates the whales in surveying the problem domain to uncover In Eq. (3), the parameter 𝜇 linearly decreases from 2 to 0 across all unexplored regions and augment the diversity within the population. 
A iterations, encompassing the exploration and exploitation phases, while → randomly selected search agent dictates the positional update for each 𝑙1 , 𝑙2 are random vectors in the range [0, 1]. The formulation of 𝜇 can → be expressed as [2]: individual whale. The parameter 𝜁 enables steering the search agent { } away from an arbitrarily chosen humpback whale. The exploration → 2 𝜇 =2−𝑟∗ (5) phase, governed by Eq. (10), prevents the premature convergence to 𝑚𝑎𝑥_ 𝑖𝑡𝑒𝑟𝑎𝑡𝑖𝑜𝑛 local optima [28]. 3.1.1.2. Exploitation phase (bubble-net attacking method). Two distinct → → → methodologies have been proposed aimed at constructing mathematical 𝐷 = |𝜉 ∗ 𝑍 𝑟𝑎𝑛𝑑 − 𝑍(𝑟)| (9) → → → models to characterize the bubble-net feeding behavior exhibited by 𝑍(𝑟 + 1) = 𝑍 𝑟𝑎𝑛𝑑 − 𝜁 ∗ 𝐷 (10) humpback whales. → i Shrinking encircling mechanism: This behavior is modeled by where, 𝑍 𝑟𝑎𝑛𝑑 denotes a randomly selected whale position vector from → the current population within the search space. diminishing the convergence parameter 𝜇 in Eq. (3). Moreover, → → the oscillation range of 𝜁 contracts linearly via 𝜇, transitioning → 3.2. Grey wolf optimization algorithm from 2 to 0 across iterations. Stated differently, 𝜁 represents a random value within the interval [−𝜇, 𝜇]. The GWO is a SI technique that emulates the hierarchical leadership ii Spiral updating position mechanism: The approach structure and cooperative hunting strategies exhibited by grey wolves → commences by quantifying the distance between the vector 𝑍 ∗ , in their natural habitats [29]. The GWO algorithm mathematically representing the best solution identified thus far, and another formalizes the search, encirclement, and attack behaviors observed in → whale position vector 𝑍, through Eq. (6). Subsequently, it de- the predatory conduct of grey wolves. It incorporates the hierarchical fines the spiral motion pattern originating from the present social structure present within wolf packs as a core concept. 
Fig. 2. Organizational structure and role hierarchy in grey wolf packs.

Wolves are classified into four distinct hierarchical tiers based on their levels of dominance. The $\alpha$, $\beta$, and $\delta$ ranks represent the leaders, presumed to possess superior capabilities that guide the pack. In contrast, the $\omega$ wolves assume a subordinate role, following the navigation directives provided by the dominant leaders (see Fig. 2).

GWO employs mathematical models that emulate the intricate hunting tactics exhibited by grey wolves, including their pursuit, encirclement, and eventual capture of prey, as a framework to guide the optimization process.

3.2.1. Social behavior

The mathematical formulation supposes $\alpha$ to be the preeminent solution, embodying the social behavior of the lead wolf. The subsequent solutions, $\beta$ and $\delta$, constitute the second- and third-best outcomes, respectively. All remaining solutions are collectively classified as $\omega$. The hunting process within the GWO algorithm is steered by the triumvirate of $\alpha$, $\beta$, and $\delta$, while the $\omega$ solutions are governed by adherence to this leading trio.

3.2.2. Hunting

The grey wolf algorithm mimics the intricate hunting strategies employed by wolf packs in nature. Central to this optimization process is the $\alpha$ wolf, acting as the lead entity guiding the search for the optimal solution. Through iterative refinement, the $\alpha$ continuously updates and stores the best solution encountered, replacing it with an improved one if found in subsequent iterations, allowing convergence towards the optimal result. While the $\alpha$ takes the lead, the $\beta$ and $\delta$ wolves contribute their prowess to the hunt. By mathematically simulating the hunting behavior, the algorithm assumes that the $\alpha$, $\beta$, and $\delta$ solutions possess superior knowledge of the potential optimal position; consequently, the top three candidate solutions are retained. The remaining search agents, including $\omega$, must update their positions based on the $\alpha$ location. In essence, the $\alpha$, $\beta$, and $\delta$ predict the optimal location, while the other wolves randomly explore the surrounding areas, driven by the overarching goal of locating the prey, that is, the global optimum.

Encircling the prey: The predatory strategy of grey wolves involves meticulously encircling and confining their prey during the hunt. This critical stage of encirclement is mathematically modeled by the following equations:

$$\vec{\Upsilon} = |\vec{\xi} \cdot \vec{Z}_p(r) - \vec{Z}(r)| \tag{11}$$

$$\vec{Z}(r+1) = \vec{Z}_p(r) - \vec{\zeta} \cdot \vec{\Upsilon} \tag{12}$$

Here, $\vec{\Upsilon}$ symbolizes the vector distance separating the prey's location from the wolf's position, $r$ denotes the current iteration number, and $(r+1)$ the iteration that follows. $\vec{Z}_p(r)$ signifies the position of the prey within the optimization process, whereas $\vec{Z}(r)$ denotes the position of the wolf. These variables model the interaction between prey and wolf, a crucial aspect of the algorithm. The coefficient vectors $\vec{\zeta}$ and $\vec{\xi}$ are iteratively calculated and refined through Eqs. (3) and (4):

$$\vec{\zeta} = 2\vec{\mu} \cdot \vec{l}_1 - \vec{\mu} \quad \text{and} \quad \vec{\xi} = 2\vec{l}_2$$

where the parameter $\vec{\mu}$ linearly decreases from 2 to 0 across the iterations, encompassing both the exploration and exploitation phases, while $\vec{l}_1, \vec{l}_2$ are random vectors in the range [0, 1].

The positions of the wolves are iteratively updated through the following equations that emulate the hunting behavior:

$$\vec{\Upsilon}_\alpha = |\vec{\xi}_1 \cdot \vec{Z}_\alpha - \vec{Z}|, \quad \vec{\Upsilon}_\beta = |\vec{\xi}_2 \cdot \vec{Z}_\beta - \vec{Z}|, \quad \vec{\Upsilon}_\delta = |\vec{\xi}_3 \cdot \vec{Z}_\delta - \vec{Z}| \tag{13}$$

$$\vec{Z}_1 = \vec{Z}_\alpha - \vec{\zeta}_1 \cdot \vec{\Upsilon}_\alpha \tag{14}$$

$$\vec{Z}_2 = \vec{Z}_\beta - \vec{\zeta}_2 \cdot \vec{\Upsilon}_\beta \tag{15}$$

$$\vec{Z}_3 = \vec{Z}_\delta - \vec{\zeta}_3 \cdot \vec{\Upsilon}_\delta \tag{16}$$

$$\vec{Z}(r+1) = \frac{\vec{Z}_1 + \vec{Z}_2 + \vec{Z}_3}{3} \tag{17}$$

3.2.3. Exploitation (attacking prey)

The hunting process involves a strategy employed by the grey wolves to restrict the prey's mobility, rendering it vulnerable to an attack. This approach is implemented by gradually decreasing the value of the parameter $\vec{\mu}$ from 2 to 0. Concurrently, the value of $\vec{\zeta}$ is reduced in accordance with $\vec{\mu}$, ensuring it remains within the range [-1, 1]. The grey wolves initiate an attack on the prey when the value of $\vec{\zeta}$ falls between -1 and 1.

Fig. 3. Dynamic positioning of search agents based on parameter interactions.

3.2.4. Exploration (search for prey)

The lead wolves, $\alpha$, $\beta$, and $\delta$, strategically position themselves in a manner that balances the pursuit of the prey with the readiness to strike. This dual approach is modeled through the parameter $\vec{\zeta}$, where values exceeding 1 represent a diversion from the prey's immediate vicinity, yet still within striking range. Another influential factor governing the exploration process is $\vec{\xi}$, which plays a crucial role, particularly in scenarios where the algorithm encounters local optima. The range of $\vec{\xi}$ lies between 0 and 2, and its value is determined through Eq. (4).
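A minimal NumPy sketch of the leader-guided GWO update of Eqs. (13)-(17) follows, assuming a continuous search space; note that when $\mu$ has decayed to 0 the update collapses exactly onto the centroid of the three leaders:

```python
import numpy as np

def gwo_update(positions, leaders, r, max_iteration, rng=None):
    """Leader-guided GWO position update (Eqs. 13-17).
    leaders = (z_alpha, z_beta, z_delta), the three best solutions so far."""
    if rng is None:
        rng = np.random.default_rng()
    mu = 2 - r * (2 / max_iteration)                 # Eq. (5): decays 2 -> 0
    new = np.empty_like(positions)
    for i, z in enumerate(positions):
        moves = []
        for z_lead in leaders:
            zeta = 2 * mu * rng.random(z.shape) - mu     # Eq. (3)
            xi = 2 * rng.random(z.shape)                 # Eq. (4)
            upsilon = np.abs(xi * z_lead - z)            # Eq. (13)
            moves.append(z_lead - zeta * upsilon)        # Eqs. (14)-(16)
        new[i] = sum(moves) / 3.0                        # Eq. (17): average of pulls
    return new
```

At the final iteration ($r = max\_iteration$, so $\mu = 0$) every agent lands on the mean of the $\alpha$, $\beta$, and $\delta$ positions, which illustrates how the shrinking coefficient drives convergence.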
Fig. 3 illustrates a search agent dynamically positioning itself within a search space using the $\alpha$, $\beta$, and $\delta$ parameters. The agent's final position, representing an estimated prey location, is depicted within a circle defined by these parameters. Surrounding agents adapt their positions around this estimated location, introducing randomness and coordinated behavior akin to predators.

4. Notations and mathematical modeling

4.1. Notations

The key symbols and their meanings, as employed in the present work, are summarized in Table 2.

Table 2. Key symbols for the present work.

G : Directed acyclic graph (DAG) representing the distributed parallel application
X = {τ1, τ2, ..., τ|X|} : Set of |X| tasks
Y = {Y1, Y2, ..., Y|Y|} : Set of |Y| processors
ĉ_{i,k} : Worst-case response time between tasks τi and τk
ŵ_{i,l} : Worst-case execution time of task τi on processor Yl
LB(G) : Lower bound of G
DL(G) : Deadline of application G
MS(G) : Makespan of application G
Ê_s(G) : Static energy consumption of G
Ê_d(τi, Yl, f_{l,h}) : Dynamic energy consumption of task τi on processor Yl at frequency f_{l,h}
Ê_d(G) : Dynamic energy consumption of G
Ê_total(G) : Aggregate energy consumption of G
R_e(G) : Reliability of the application G
R_e(τi, Yl, f_{l,h}) : Reliability of task τi executed on Yl at frequency f_{l,h}
R_e(min)(G) : Minimum reliability value of G
R_e(max)(G) : Maximum reliability value of G
R_e(goal)(G) : Reliability goal of the application G
λ_{l,h} : Failure rate of processor Yl at frequency f_{l,h}
R_e(min)(τi) : Minimum reliability value of τi
R_e(max)(τi) : Maximum reliability value of τi
R_e(τi) : Reliability of task τi
EST(τk, Yl) : Earliest start time of the kth task on processor Yl
EFT(τk, Yl) : Earliest finish time of the kth task on processor Yl
AFT(τk) : Actual finish time of the kth task

4.2. Application model

The directed acyclic graph (DAG) serves as a versatile representation widely adopted in academic research for modeling distributed parallel applications. In this study, the application is modeled as a DAG, denoted $G = (X, \hat{E}, \hat{W}, \hat{C})$. This model encompasses a set $X$ comprising computational tasks with distinct worst-case execution times ($\hat{W}CETs$) on different processors. Furthermore, the model incorporates a set $\hat{E}$ representing communication edges between these tasks. Each element $\hat{e}_{i,k} \in \hat{E}$ signifies a communication link from task $\tau_i$ to $\tau_k$, accompanied by a precedence constraint that mandates task $\tau_k$ to commence only upon the completion of task $\tau_i$. Consequently, every $\hat{c}_{i,k}$ represents the worst-case response time ($\hat{W}CRT$) of $\hat{e}_{i,k}$. The set $succ(\tau_i)$ contains the immediate successor tasks of $\tau_i$, and $pred(\tau_i)$ the immediate predecessors. Tasks lacking predecessors are designated $\tau_{entry}$, while those lacking successors are designated $\tau_{exit}$. In instances where an application includes multiple $\tau_{entry}$ or $\tau_{exit}$ tasks, dummy tasks with zero-weight dependencies are introduced into the graph to maintain consistency [8]. For $|Y|$ processors, $\hat{w}_{i,l} \in \hat{W}_{|X| \times |Y|}$ gives the $\hat{W}CET$ of task $\tau_i$ on processor $Y_l$.

The expressions LB(G) and DL(G) represent the lower bound and deadline of G, respectively. The lower bound of an application must remain below its associated deadline, i.e., $LB(G) \le DL(G)$. In this work, the lower bound denotes the minimal makespan achievable by a conventional scheduling algorithm when each processor is singularly dedicated to the application and operates at its maximum frequency [36,86]. $MS(G)$ denotes the actual makespan achieved by application G, which signifies the precise conclusion time of $\tau_{exit}$ within the corresponding DAG.

Fig. 4. Example of a DAG featuring 10 tasks.

Fig. 4 illustrates a DAG-based parallel application as an example [8,86]. This demonstration comprises 10 tasks processed across three designated processors $\{Y_1, Y_2, Y_3\}$. In the illustration, the weight of 18 on the edge $\hat{e}_{1,2}$ connecting $\tau_1$ and $\tau_2$ symbolizes the response time, denoted $\hat{c}_{1,2}$, if $\tau_1$ and $\tau_2$ are not allocated to the same processor. The data in Table 3 represent the worst-case execution times at the maximum frequency for the DAG in Fig. 4. The value of 14 at the intersection of $\tau_1$ and $Y_1$ denotes the $\hat{W}CET$ $\hat{w}_{1,1} = 14$. The variation in $\hat{W}CET$ for an identical task across different processors arises from the intrinsic diversity of the processors.

Table 3. $\hat{W}CETs$ of tasks in Fig. 4.

Tasks \ Processors | Y1 | Y2 | Y3
τ1  | 14 | 16 |  9
τ2  | 13 | 19 | 18
τ3  | 11 | 13 | 19
τ4  | 13 |  8 | 17
τ5  | 12 | 13 | 10
τ6  | 13 | 16 |  9
τ7  |  7 | 15 | 11
τ8  |  5 | 11 | 14
τ9  | 18 | 12 | 20
τ10 | 21 |  7 | 16

4.3. Power and energy models

Within the power model of Eq. (18) below, the parameter $\phi$ signifies the system state and serves as an indicator of whether dynamic power is presently being consumed, where $\phi = 1$ signifies an active state and $\phi = 0$ an inactive condition. The term $C_{ef}$ symbolizes the effective switching capacitance, and the exponent $m$ is known as the dynamic power exponent, with a minimum value of 2. Both $C_{ef}$ and $m$ are processor-specific constants.

Strategically reducing the operating frequency presents an avenue to curb frequency-dependent power dissipation. However, prolonged execution times may ensue, augmenting static power consumption and frequency-independent power expenditure. Several studies, including those conducted by [8,87] and [88], have established the existence of an optimal energy-efficient frequency, denoted $f_{ee}$, at which the system achieves minimal power consumption. This optimal frequency can be formulated as follows:

$$f_{ee} = \sqrt[m]{\frac{P_{ind}}{(m-1)C_{ef}}} \tag{19}$$

Under the premise that a processor's operating frequency can vary between a minimum available frequency $f_{min}$ and a maximum frequency $f_{max}$, the optimal energy-efficient frequency for executing a given task should adhere to the following formulation:

$$f_{low} = \max(f_{ee}, f_{min}) \tag{20}$$

As a result, any actual effective frequency $f_h$ should reside within the range delineated by $f_{low} \le f_h \le f_{max}$ [8].
Considering the nearly linear correlation between voltage and frequency, DVFS techniques are employed to scale down both parameters, thereby achieving energy conservation. Consistent with the approaches adopted in [8,87], the term frequency change denotes the simultaneous alteration of both voltage and frequency. For DVFS-capable systems, a widely adopted system-level power model, as exemplified in [8,87], is leveraged. This model expresses the power consumption at a given frequency $f$ as:

$$P(f) = P_s + \phi(P_{ind} + P_d) = P_s + \phi(P_{ind} + C_{ef} f^m) \tag{18}$$

Within this power model, $P_s$ symbolizes the static power component, which can be mitigated solely by deactivating the entire system. The frequency-independent dynamic power is represented by $P_{ind}$; this component can be eliminated by transitioning the system into a low-power sleep state. $P_d$ denotes the frequency-dependent dynamic power component.

For a system with $|Y|$ heterogeneous processors, each processor requires individual power parameters. The static power set is defined as $\{P_{1,s}, P_{2,s}, \ldots, P_{|Y|,s}\}$; the frequency-independent and frequency-dependent dynamic power sets are $\{P_{1,ind}, \ldots, P_{|Y|,ind}\}$ and $\{P_{1,d}, \ldots, P_{|Y|,d}\}$; the effective switching capacitances are $\{C_{1,ef}, \ldots, C_{|Y|,ef}\}$; the lowest energy-efficient frequency set is $\{f_{1,low}, \ldots, f_{|Y|,low}\}$; and the actual effective frequency set is $\{\{f_{1,low}, f_{1,c}, f_{1,d}, \ldots, f_{1,max}\}, \{f_{2,low}, f_{2,c}, f_{2,d}, \ldots, f_{2,max}\}, \ldots, \{f_{|Y|,low}, f_{|Y|,c}, f_{|Y|,d}, \ldots, f_{|Y|,max}\}\}$.

Let $\hat{E}_s(G)$ denote the static energy consumed by active processors executing application G. Since inactive processors do not consume energy, $\hat{E}_s(G)$ is the sum of static energy consumption across all active processors:

$$\hat{E}_s(G) = \sum_{l=1,\ Y_l\ \text{is on}}^{|Y|} P_{l,s} \cdot MS(G) \tag{21}$$

The dynamic power consumption of task $\tau_i$ executing on $Y_l$ at frequency $f_{l,h}$, represented by $P_d(\tau_i, Y_l, f_{l,h})$, is formulated as:

$$P_d(\tau_i, Y_l, f_{l,h}) = P_{l,ind} + C_{l,ef} \cdot f_{l,h}^{m_l} \tag{22}$$

and the dynamic energy consumption of $\tau_i$ is calculated as:

$$\hat{E}_d(\tau_i, Y_l, f_{l,h}) = (P_{l,ind} + C_{l,ef} \cdot f_{l,h}^{m_l}) \cdot \frac{f_{l,max}}{f_{l,h}} \cdot \hat{w}_{i,l} \tag{23}$$

Let $\hat{E}_d(G)$ represent the total dynamic energy consumption of application G, calculated as the sum of the dynamic energy consumed by each task:

$$\hat{E}_d(G) = \sum_{i=1}^{|X|} \hat{E}_d(\tau_i, Y_{ac(i)}, f_{ac(i),hz(i)}) \tag{24}$$

where $Y_{ac(i)}$ signifies the processor and $f_{ac(i),hz(i)}$ the frequency at which task $\tau_i$ is actively executing. Hence, the aggregate energy consumption of G can be deduced using the following formulation:

$$\hat{E}_{total}(G) = \hat{E}_s(G) + \hat{E}_d(G) \tag{25}$$

4.4. Reliability model and reliability goal

The assumption that processor reliability follows a Poisson distribution has been extensively studied and widely accepted within the relevant literature [8]. The variable $\lambda_l$ denotes the failure rate per unit time of processor $Y_l$, and the reliability of a task $\tau_i$ executed on processor $Y_l$ within its $\hat{W}CET$ can be quantified as:

$$R_e(\tau_i, Y_l) = e^{-\lambda_l \cdot \hat{w}_{i,l}} \tag{26}$$

For DVFS-enabled processors, research shows varying failure rates across frequencies. Let $\lambda_{l,max}$ denote the failure rate of processor $Y_l$ at maximum frequency. Then, the failure rate $\lambda_{l,h}$ of $Y_l$ at frequency $f_{l,h}$ is calculated as:

$$\lambda_{l,h} = \lambda_{l,max} \cdot 10^{\frac{d(f_{l,max} - f_{l,h})}{f_{l,max} - f_{l,min}}} \tag{27}$$

where the constant $d$ indicates the sensitivity of failure rates to voltage scaling. Subsequently, a correlation between task reliability and frequency is established by combining Eqs. (26) and (27); the reliability of task $\tau_i$ executed on processor $Y_l$ at frequency $f_{l,h}$ is:

$$R_e(\tau_i, Y_l, f_{l,h}) = e^{-\lambda_{l,h} \cdot \frac{\hat{w}_{i,l} \cdot f_{l,max}}{f_{l,h}}} = e^{-\lambda_{l,max} \cdot 10^{\frac{d(f_{l,max} - f_{l,h})}{f_{l,max} - f_{l,min}}} \cdot \frac{\hat{w}_{i,l} \cdot f_{l,max}}{f_{l,h}}} \tag{28}$$

The overall reliability of an application can be expressed as the product of the reliability values of its constituent tasks:

$$R_e(G) = \prod_{\tau_i \in X} R_e(\tau_i) \tag{29}$$

To determine the application's reliability bounds, an evaluation across all available processors is conducted. The minimum and maximum task reliability values are derived as:

$$R_{e(min)}(\tau_i) = \min_{Y_l \in Y} R_e(\tau_i, Y_l, f_{l,low}) \tag{30}$$

$$R_{e(max)}(\tau_i) = 1 - \prod_{Y_l \in Y} \left(1 - R_e(\tau_i, Y_l, f_{l,max})\right) \tag{31}$$

As per Eq. (29), the reliability of application G is the product of task reliabilities; hence the minimum and maximum reliability values of G can be computed as:

$$R_{e(min)}(G) = \prod_{\tau_i \in X} R_{e(min)}(\tau_i) \tag{32}$$

$$R_{e(max)}(G) = \prod_{\tau_i \in X} R_{e(max)}(\tau_i) \tag{33}$$

The application is deemed reliable if its reliability metric satisfies the specified reliability goal $R_{e(goal)}(G)$, i.e.,

$$R_{e(min)}(G) \le R_{e(goal)}(G) \le R_{e(max)}(G) \tag{34}$$

4.5. Description of scheduling problem

This subsection addresses the task scheduling problem in distributed computing environments comprising a parallel application G and a heterogeneous processor set Y whose processors support varying frequency levels. The assignment must simultaneously optimize two critical objectives: minimizing overall energy consumption and ensuring that the application's reliability metric $R_e(G)$ meets or surpasses a predefined reliability goal $R_{e(goal)}(G)$. Formally:

$$\min\ \hat{E}_{total}(G) = \hat{E}_s(G) + \hat{E}_d(G) \quad \text{subject to} \quad R_e(G) = \prod_{\tau_i \in X} R_e(\tau_i) \ge R_{e(goal)}(G)$$

Ultimately, the goal is to judiciously map tasks to processor-frequency combinations that strike an optimal balance between reducing energy usage and boosting system reliability for the given application.

5. Proposed hybrid approach

This study presents a methodology that integrates the HWWO algorithm with the DVFS technique, addressing the need for energy-efficient task scheduling strategies that can adeptly handle both static and dynamic energy considerations. The dynamic adjustment of processor voltage and frequency, facilitated by the DVFS mechanism, enables energy consumption optimization during task execution. This integrated methodology has the potential to transform task scheduling in computational environments by achieving substantial energy savings, enhancing system reliability, and concurrently optimizing computational time. The HWWO algorithm employs a mutation operator in conjunction with an insert-reversed block operation, contributing to the minimization of static energy consumption. Complementing this, the DVFS technique is strategically employed to tackle dynamic energy consumption, further augmenting the energy efficiency of the proposed solution. The incorporation of the WOA and the GWO into the task assignment methodology is substantiated by their exceptional versatility: these techniques have consistently exhibited superior performance on assignment problems, demonstrating remarkable convergence characteristics [89]. In any SI algorithm, achieving a balance between exploitation and exploration is crucial for effectiveness in terms of convergence speed and solution quality.
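The energy and reliability models of Eqs. (23) and (27)-(29) can be sketched as small functions; the parameter names below mirror the notation of Table 2, and the numeric check uses illustrative values only:

```python
import math

def dynamic_energy(p_ind, c_ef, m, w, f, f_max):
    """Eq. (23): power (P_ind + C_ef * f**m) over the scaled execution time
    w * f_max / f."""
    return (p_ind + c_ef * f ** m) * (f_max / f) * w

def task_reliability(lam_max, d, w, f, f_min, f_max):
    """Eq. (27): failure rate grows as frequency drops; Eq. (28): task
    reliability at frequency f."""
    lam = lam_max * 10 ** (d * (f_max - f) / (f_max - f_min))
    return math.exp(-lam * w * f_max / f)

def app_reliability(task_reliabilities):
    """Eq. (29): application reliability as the product of task reliabilities."""
    return math.prod(task_reliabilities)
```

At $f = f_{max}$ the exponent in Eq. (27) vanishes, so Eq. (28) collapses to the fixed-frequency form of Eq. (26); lowering $f$ both raises the failure rate and stretches the execution time, so reliability decreases.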
As underscored in [30], the WOA showcases remarkable convergence speed while adeptly balancing exploration and exploitation in addressing optimization problems. Although the WOA is effective compared to conventional algorithms, it may struggle to evade local optima because of its encircling search mechanism, and it provides insufficient solution enhancement after each iteration. These limitations of WOA prompted the proposal of a hybrid approach with GWO. GWO excels in exploitation, offering a solution to WOA's challenges through two strategies: preserving the best solution per iteration and evaluating new solutions against the best one during exploration. If the outcome improves upon the best solution, the agents' positions change; otherwise, they remain unchanged. Additionally, to enhance the HWWO algorithm, the authors propose integrating mutation operators into WOA and combining it with GWO.

Mapping tasks to multiprocessors poses a significant NP-hard challenge. Consequently, the hybrid metaheuristic HWWO scheduling technique is employed as a strategic solution. This approach unfolds across two distinct phases: initial task prioritization, wherein tasks are sequenced in descending order of priority, and subsequent task allocation to suitable processors.

Phase 1

5.1. Prioritizing tasks and deadline determination

To calculate the lower bound for task scheduling, this study utilizes the heterogeneous earliest-finish-time (HEFT) algorithm proposed by [86], chosen for its proven effectiveness in generating high-quality schedules for heterogeneous computing systems. Obtaining an exact lower bound is a challenging endeavor, so the schedule length estimated by the HEFT method is adopted as the standard. It is further assumed that the application's deadline constraint is not known until this lower-bound reference has been determined.

The strategic allocation of tasks stands as a pivotal challenge in DAG list scheduling within heterogeneous distributed systems. To tackle this challenge, the present article adopts the dynamic variant rank HEFT algorithm (DVR HEFT) introduced by [90]. As elucidated in [90], the DVR HEFT algorithm demonstrates enhanced performance over its conventional counterpart, HEFT, by yielding superior outcomes while concurrently mitigating time complexity. By utilizing this improved task prioritization approach, DVR HEFT aims to enhance the efficiency of scheduling interdependent tasks across heterogeneous computing resources within a distributed system. In its initial phase, akin to other static algorithms, DVR HEFT computes task priorities through a comprehensive evaluation of their upward rank values, following a procedure similar to that of HEFT. The upward rank $Rank_U(\tau_i)$ of a task $\tau_i$ is recursively determined through the following equation:

$$Rank_U(\tau_i) = f(\hat{w}_i) + \max_{\tau_k \in succ(\tau_i)} \left[ avr(\hat{c}_{i,k}) + Rank_U(\tau_k) \right] \tag{35}$$

where $\hat{w}_i$ represents the $\hat{W}CET$ of task $\tau_i$. The task weight value, produced by the function $f(\hat{w}_i)$, is contingent upon the task's execution duration on each processor, whereas $\hat{c}_{i,k}$ signifies the $\hat{W}CRT$ between tasks $\tau_i$ and $\tau_k$.

In a heterogeneous computing environment, the time required to execute a task fluctuates with the capabilities and performance characteristics of the machine handling it. As a result, multiple methodologies exist to calculate the computational weight associated with each node. The approach chosen to determine a node's weight $\hat{w}_i$ may optimize computation time in certain scenarios, but it does not guarantee improvements across all cases. Three weighting schemes are considered:

1. Determine task weights through the average execution time across all processors, mirroring the HEFT algorithm's methodology:

$$f(\hat{w}_i) = avr(\hat{w}_{Y_1}, \hat{w}_{Y_2}, \ldots, \hat{w}_{Y_{|Y|}}) \tag{36}$$

2. Alternatively, the weight can be derived from the best-case scenario:

$$f(\hat{w}_i) = \min(\hat{w}_{Y_1}, \hat{w}_{Y_2}, \ldots, \hat{w}_{Y_{|Y|}}) \tag{37}$$

3. Task weighting based on the worst-case scenario:

$$f(\hat{w}_i) = \max(\hat{w}_{Y_1}, \hat{w}_{Y_2}, \ldots, \hat{w}_{Y_{|Y|}}) \tag{38}$$

Each of these schemes results in a unique task ordering. For instance, when applied to the DAG example shown in Fig. 4, they produce different upward rank lists. The best ranking is achieved by the first scheme, the HEFT averaging scheme; the corresponding values for the tasks of Fig. 4 are summarized in Table 4.

Upon employing the DVR HEFT technique to generate an optimized upward rank list, the next step is to calculate the computation time required to allocate each task to the available processors, following the methodology of the HEFT algorithm. The task with the highest rank value is identified and assigned to the processor that can complete its execution at the earliest finish time (EFT). To determine the earliest feasible execution time of a task $\tau_k$ on processor $Y_l$, the values of $EST(\tau_k, Y_l)$ (earliest start time) and $EFT(\tau_k, Y_l)$ are calculated recursively as follows:

$$EST(\tau_{entry}, Y_l) = 0; \qquad EST(\tau_k, Y_l) = \max \left\{ avail[l],\ \max_{\tau_i \in pred(\tau_k)} \left\{ AFT(\tau_i) + \hat{c}_{i,k} \right\} \right\} \tag{39}$$

$$EFT(\tau_k, Y_l) = EST(\tau_k, Y_l) + \hat{w}_{k,l} \tag{40}$$

The term $avail[l]$ denotes the earliest moment at which processor $Y_l$ is ready to execute a task, while $AFT(\tau_i)$ refers to the actual finish time of task $\tau_i$. If tasks $\tau_i$ and $\tau_k$ are allocated to the same processor, the variable $\hat{c}_{i,k}$ is assigned a value of zero. The makespan of the application defines the exact completion time of task $\tau_{exit}$, accounting for the scheduling of all tasks within the DAG. As previously described, the lower bound of application G is computed as:

$$LB(G) = LB(\tau_{exit}) \tag{41}$$

Therefore, the relative deadline can be fulfilled. For the illustrated example in Fig. 4, the application's deadline is taken as the sum of its lower bound and 20 [86].

The comparative scheduling results for the DAG of Fig. 4 have been determined using a variety of algorithms, including HEFT [86], DECM [37], energy-aware processor merging (EPM) [91], reliability enhancement under energy and response time constraints (REREC) [38], and energy-efficient scheduling with a reliability goal (ESRG) [8]. Assessing energy consumption and reliability involves referencing the power parameters listed in Table 5 for all processors. Each processor's energy-efficient frequency $f_{ee}$ is calculated from Eq. (19), while the maximum frequency $f_{max}$ of each processor is taken to be 1.0, as in previous studies [8,91]. The schedule generated by employing the HEFT algorithm at its maximum frequency level is graphically presented in Fig. 5.
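The recursion of Eq. (35) with the averaging scheme of Eq. (36) can be sketched as follows. The DAG used here is a small hypothetical diamond, since the full edge weights of Fig. 4 are not reproduced in the text; as a sanity check against Table 4, the rank of an exit task reduces to its average WCET, e.g., $avr(21, 7, 16) \approx 14.7$ for $\tau_{10}$:

```python
def avg_weight(wcets):
    """Eq. (36): f(w_i) = average WCET across processors."""
    return sum(wcets) / len(wcets)

def upward_ranks(succ, w_avg, c_avg):
    """Eq. (35): Rank_U(t) = f(w_t) + max over successors of
    [average communication cost + Rank_U(successor)]."""
    memo = {}
    def rank(t):
        if t not in memo:
            memo[t] = w_avg[t] + max(
                (c_avg[(t, k)] + rank(k) for k in succ.get(t, [])), default=0.0)
        return memo[t]
    return {t: rank(t) for t in w_avg}

# Hypothetical 4-task diamond DAG: t1 -> {t2, t3} -> t4 (illustrative weights)
succ = {"t1": ["t2", "t3"], "t2": ["t4"], "t3": ["t4"]}
w_avg = {"t1": 10.0, "t2": 8.0, "t3": 6.0, "t4": 4.0}
c_avg = {("t1", "t2"): 2.0, ("t1", "t3"): 3.0, ("t2", "t4"): 1.0, ("t3", "t4"): 2.0}
ranks = upward_ranks(succ, w_avg, c_avg)
```

Sorting the tasks by descending rank then yields the priority list used for allocation.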
As noted, the researchers in [90] proposed the three alternative schemes of Eqs. (36)-(38) for calculating the upward rank values assigned to tasks.

The aggregate energy $\hat{E}_{total}(G)$ and the overall reliability $R_e(G)$ resulting from the execution of the HEFT algorithm are quantified as 170.52 and 0.880426, respectively [91]. The visual depictions of the scheduling outcomes of the other algorithms are illustrated in Figs. 6-9.

Table 4. $Rank_U$ values for tasks illustrated in Fig. 4.

Tasks  | τ1     | τ3     | τ2     | τ4     | τ5     | τ6    | τ7    | τ8    | τ9    | τ10
Rank_U | 133.19 | 118.19 | 115.86 | 114.19 | 101.53 | 87.27 | 70.86 | 59.86 | 44.36 | 14.7

Fig. 5. Scheduling result of the DAG shown in Fig. 4 using the HEFT technique.
Fig. 6. Scheduling result of the DAG shown in Fig. 4 using the DECM technique.
Fig. 7. Scheduling result of the DAG shown in Fig. 4 using the EPM technique.
Fig. 8. Scheduling result of the DAG shown in Fig. 4 using the REREC technique.
Fig. 9. Scheduling result of the DAG shown in Fig. 4 using the ESRG technique.

Table 5. Power parameters of processors (Y1, Y2, and Y3).

Y_l | P_{l,s} | C_{l,ef} | P_{l,ind} | m_l | f_{l,low} | f_{l,max} | λ_{l,max}
Y1  | 0.3     | 0.8      | 0.06      | 2.9 | 0.33      | 1.0       | 0.0005
Y2  | 0.2     | 0.7      | 0.07      | 2.7 | 0.29      | 1.0       | 0.0002
Y3  | 0.1     | 1.0      | 0.07      | 2.4 | 0.29      | 1.0       | 0.0009

It is worth highlighting that the processors displayed with shading in the figures are inactive or idle during the execution of the respective schedules, while the tasks displayed in blue indicate a higher execution frequency. Initially, all processors shown in Fig. 6 and processors $Y_1$ and $Y_2$ illustrated in Fig. 7 become active. Following that, every processor depicted in Fig. 8 is activated. Afterward, Fig. 9 demonstrates the activation of processors $Y_1$ and $Y_2$ specifically.

The total energy consumption when employing the DECM algorithm (depicted in Fig. 6) is lower than that of the other algorithms, followed by the REREC and ESRG algorithms. Crucially, this technique also enhances the system's reliability compared to these other methods. By analyzing Figs. 5-9, it becomes evident that the DECM technique delivers superior performance concerning both energy efficiency and reliability, surpassing the EPM, HEFT, REREC, and ESRG algorithms.

Phase 2

5.2. DVFS-integrated hybrid WOA-GWO scheduling model

The preceding section explains the task prioritization process, where tasks are arranged in descending order of priority by employing the DVR HEFT technique; this method aids in reducing the static energy consumed. This section proposes a novel algorithmic approach that leverages the DVFS-integrated EES technique. The algorithm incorporates a HWWO model featuring an insert-reversed block operation and a mutation operator. The primary goal is to efficiently assign tasks to appropriate processors within computational environments, thereby achieving substantial reductions in energy consumption while simultaneously enhancing the overall reliability of the system. The mutation operator's main role is to diversify the population and boost HWWO's global exploration; this study employs the inversion mutation operator to fulfill that function.

5.2.1. Termination criteria

Optimization algorithms must have proper stopping conditions to ensure they do not run indefinitely. The algorithm iterates until convergence, making it essential to set termination criteria to assess the convergence of the optimization process. In this case, the algorithm carries out 20 optimization iterations before assessing any convergence criteria. Subsequent to this initial phase, it examines two criteria:

i The first criterion measures the change in the fitness function between the latest optimization iteration $f_{fn}(r)$ and the previous iteration $f_{fn}(r-1)$. When the relative change is less than a specified tolerance $\epsilon_f$, e.g., $\epsilon_f = 0.001$ (0.1%), the stop criterion is met.
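A minimal sketch of the two stopping rules (warm-up of 20 iterations, the relative-change tolerance of Eq. (42) with an absolute value added for safety, and the 500-iteration cap) might be:

```python
def should_stop(f_curr, f_prev, iteration, warmup=20, eps_f=0.001, max_iter=500):
    """Criterion ii: hard iteration cap. Criterion i: relative fitness change
    below eps_f (Eq. 42), checked only after the warm-up iterations."""
    if iteration >= max_iter:
        return True
    if iteration < warmup or f_curr == 0:
        return False
    return abs(f_curr - f_prev) / abs(f_curr) < eps_f
```

The guard against `f_curr == 0` and the use of `abs` are defensive additions not spelled out in the text.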
This stop criterion is defined as: The section before this one explains the task prioritization pro- 𝑓𝑓 𝑛 (𝑟) − 𝑓𝑓 𝑛 (𝑟 − 1) cess, where tasks are arranged in descending order of priorities by < 𝜖𝑓 (42) 𝑓𝑓 𝑛 (𝑟) 12 Karishma and H. Kumar Computer Standards & Interfaces 97 (2026) 104106 Fig. 10. Inversion mutation operator. ii If the above criterion is not satisfied, the optimization process Algorithm 1 Pseudo code for the insert-reversed block operation halts after reaching the maximum number of iterations, set here ( ) |𝑋| to 500. 1: temp = random 2, ; 2 2: arr positions [temp]; 5.2.2. Scheduling fitness function 3: function replace(search_agent, i) HWWO integrates metaheuristic techniques for tasks scheduling, 4: for j = 0 to temp - 1 do with the effectiveness contingent upon the crafted fitness function 5: search_agent[j] = switch(i, positions[value - j - 1]); determining suitable computing resources. Eq. (43) outlines the formu- 6: end for lation of the fitness function for each particle in HWWO. 7: return search_agent; [ |𝑌 | |𝑋| ( )] 8: end function ∑ 𝑀𝑆(𝐺) ∑ −𝜆𝑙,𝑚𝑎𝑥 ∗ 𝑤̂ 𝑖,𝑙 ∗𝑓𝑙,𝑚𝑎𝑥 𝑓𝑓 𝑛(𝜏𝑖 ,𝑌𝑙 ,𝑓𝑙,ℎ ) = 𝑚𝑖𝑛 𝜎1 ∗ 𝑃𝑙,𝑠 ∗ + 𝜎2 ∗ 𝑒 𝑓𝑙,ℎ (43) 9: function insert_reversed_block_operation(assignment) 𝐷𝐿(𝐺) 𝑙=1 𝑖=1 10: matrix permutations[|𝑋| − 1][|𝑋|]; Here, 𝜎1 and 𝜎2 signify the optimal weights for energy consumption 11: positions = sorted(random(0, |𝑋| - 1, temp)); ⊳ Select ‘temp‘ and computation time, subject to 𝜎1 + 𝜎2 = 1. During selection, each number of positions of tasks. solution’s objectives receive random weights, encouraging exploration 12: permutations[0] = assignment; in various directions. 13: for i = 0 to |𝑋| - 3 do 14: if i not in positions then 5.2.3. Inversion mutation 15: permutations[i + 1] = replace(permutations[i], i); Inversion mutation [92], an operator in evolutionary algorithms, 16: end if randomly selects a segment of genes within a chromosome and reverses 17: end for their order. 
This process fosters genetic diversity by exploring different regions of the search space, potentially leading to improved solutions. This can be understood by referring to the illustrative example presented in Fig. 10.

5.2.4. Insert-reversed block operation

Excessive mutation can disrupt previously found good solutions and impede the algorithm's convergence toward optimal or near-optimal solutions. This occurs because random changes introduced by mutation may not always improve the solutions. Consequently, after applying the inversion mutation operator, the insert-reversed block operation [93] is integrated into the algorithm. This operation inserts a reversed block of the task permutation into all (|𝑋| − 1) conceivable positions within a |𝑋|-dimensional search agent, as outlined in Algorithm 1 and illustrated in Fig. 11.

Fig. 11. Example demonstrating the insert-reversed block operation.

5.2.5. HWWO scheduling algorithm

The pseudocode in Algorithm 2 outlines the procedure for task scheduling employing the HWWO technique. A more in-depth elucidation of these steps is presented in the following paragraphs:

Step 1: Whale–wolf encoding and position vector initialization.

The developed technique views tasks as elements within the HWWO framework, which systematically refines their designated positions over successive iterations, ultimately converging on an optimal task allocation among the available processors. One of the key obstacles in formulating an effective HWWO approach for tackling optimization challenges lies in the intricate process of developing an appropriate encoding mechanism to model the constituent elements or search agents within the HWWO framework. For optimization scenarios involving |𝑌| processors, a viable strategy to model the constituent search particles is through the utilization of |𝑌|-dimensional coordinate vectors, as illustrated in the subsequent representation:

𝑋⃗ = (𝑋⃗1, 𝑋⃗2, …, 𝑋⃗|𝑌|)

Step 2: Evaluating fitness values for each particle in the assignment.

This step calculates the fitness score for each HWWO particle, reflecting the task-to-processor assignment. The hybrid WOA–GWO framework evaluates individual particle fitness using the function defined in Eq. (43). When a particle's current fitness improves upon its previous best, the global best fitness is updated to the new value.

Step 3: Updating the position vector.

The proposed HWWO approach updates the position of each whale using Eqs. (2), (8), and (10) under different scenarios, whereas Eq. (17) facilitates the positional update of the wolf for every iteration, taking into consideration the values of 𝑍⃗𝛼, 𝑍⃗𝛽, and 𝑍⃗𝛿 as defined by Eqs. (13)–(16).

Step 4: Emergence of optimal scheduling.

The algorithm models task assignment using whales and wolves. Applying the mutated and permutated HWWO, particle positions are adjusted for optimal task scheduling on processors. With the optimal scheduling obtained, the EES algorithm then optimizes voltage and frequency distribution among all processors.

Step 5: Calculating total energy consumption and reliability.

The aggregate energy consumption of the resulting set of particles can be determined by utilizing Eq. (25), while their reliability metric is ascertained through the application of Eq. (29).

The outlined steps should be iteratively executed until the stopping criteria or termination condition is met. The following provides a comprehensive description of Algorithm 2. Fig. 12 visually depicts the scheduling outcomes obtained with the proposed HWWO algorithm, and Fig. 13 illustrates the flowchart of the proposed model.

Fig. 12. Best scheduling result of the DAG shown in Fig. 4 using the proposed HWWO technique.

Fig. 13. Flowchart of the proposed HWWO model.

The algorithm effectively reduces total energy consumption and enhances reliability. Using Eq. (25), the total energy consumption 𝐸̂𝑡𝑜𝑡𝑎𝑙(𝐺) is calculated as 75.673 units, with 𝐸̂𝑠(𝐺) being 60.6 units and 𝐸̂𝑑(𝐺) being 15.073 units. Furthermore, to satisfy the reliability goal 𝑅𝑒(𝑔𝑜𝑎𝑙)(𝐺), i.e., 𝑅𝑒(𝑚𝑖𝑛)(𝐺) ≤ 𝑅𝑒(𝑔𝑜𝑎𝑙)(𝐺) ≤ 𝑅𝑒(𝑚𝑎𝑥)(𝐺), the maximum reliability 𝑅𝑒(𝑚𝑎𝑥)(𝐺) and the minimum reliability 𝑅𝑒(𝑚𝑖𝑛)(𝐺) for the DAG are evaluated as 0.99989 and 0.89738, respectively. Here, the reliability goal 𝑅𝑒(𝑔𝑜𝑎𝑙)(𝐺) is set at 0.95, and the obtained reliability 𝑅𝑒(𝐺) stands at 0.978526 > 𝑅𝑒(𝑔𝑜𝑎𝑙)(𝐺). The improvement is clear when contrasted with the energy consumption and reliability results shown in Figs. 5–9, representing the HEFT, DECM, EPM, REREC, and ESRG algorithms. The results for the DAG, as depicted in Fig. 4, clearly highlight the superiority of the proposed algorithm in reducing energy consumption and enhancing system reliability.

5.2.6. Time and space complexities

The computational time complexities required by the WOA and the GWO are expressed as 𝑂(|𝑋| ∗ |𝑌|). In the proposed HWWO technique, the time taken for steps 14–17, during which the mutation operation is performed on each assignment, exhibits an identical complexity of 𝑂(|𝑋| ∗ |𝑌|). Moreover, the evaluation of the fitness function for each assignment, carried out between lines 19–21, has a time complexity of 𝑂(|𝑌|). The iterative loop spanning steps 33 to 63 also requires a time complexity scaling as 𝑂(|𝑌| ∗ |𝑋|). As a result, the overall time complexity per iteration of the HWWO algorithm is the cumulative sum of these individual complexities, amounting to 𝑂(|𝑋| ∗ |𝑌|) + 𝑂(|𝑌|). To store the assignments of |𝑋| tasks across |𝑌| processors, a matrix structure of size |𝑌| ∗ |𝑋| is required, and an array of length |𝑌| is needed to hold the fitness value of each assignment. Consequently, the overall space complexity for representing and evaluating these assignments is 𝑂(|𝑋| ∗ |𝑌|) + 𝑂(|𝑌|).

Alternatively, the algorithms based on energy considerations, such as the ESRG, EPM, and REREC techniques, exhibit time complexities of 𝑂(|𝑋|² ∗ |𝑌|), 𝑂(|𝑋|² ∗ |𝑌|³), and 𝑂(|𝑋|² ∗ |𝑌|), respectively. It is noteworthy that, in terms of time complexity, the proposed algorithm surpasses the performance of the other existing algorithms in this domain.

6. Experimental evaluation

This study aims to develop an approach that improves system reliability, reduces the overall computation time, and conserves energy usage. The proposed model's performance in handling task scheduling across heterogeneous distributed systems has been rigorously evaluated through a comprehensive set of experiments. This section outlines the simulations conducted and the evaluation metrics configured to address the stated problem using the proposed algorithm. An in-depth analysis of each experiment is provided in the following subsections.

6.1. Experimental metrics

The HWWO algorithm's efficacy in distributed computing is comprehensively evaluated using multiple performance metrics for analytical assessment. These metrics cover total energy consumption, system reliability, computation time, resource utilization, SLR, CCR, and sensitivity analysis. This comprehensive assessment aims to provide insights into the algorithm's performance and its potential impact on the distributed computing environment.

With regard to performance evaluation, the proposed algorithm's capabilities are benchmarked through two distinct stages. Firstly, a comparative analysis is conducted between the introduced HWWO technique and several existing state-of-the-art methods, namely HEFT, DECM, EPM, REREC, and ESRG.
Additionally, the successful state-of-the-art algorithms from the ‘CEC2020 competition on real-world single objective constrained optimization’ – specifically, SASS [94], sCMAgES [95], EnMODE [96], and COLSHADE [97] – are incorporated as four benchmark algorithms for comparative evaluation in the context of real-world optimization challenges, as outlined in the relevant literature [98]. Furthermore, the proposed methodology is evaluated against several metaheuristic approaches, including PSO, GWO, ACO, KH, WOA, DA (dragonfly algorithm) [67], and AHA (artificial hummingbird algorithm) [67], using various performance metrics. The algorithm's capabilities are also assessed through a series of benchmark tests involving unimodal test functions, which are compared against the aforementioned metaheuristic techniques. This comparative analysis is carried out across multiple scenarios, generated by varying the number of tasks over different generations, to provide a comprehensive understanding of the algorithm's performance under diverse conditions. The assessment of the presented benchmark suite is conducted on a personal computer equipped with the Microsoft Windows 11 operating system, featuring an Intel Core i3 CPU and 8 GB of RAM.

Stage I

6.2. Benchmark analysis with state-of-the-art algorithms

This subsection delves into an assessment of the proposed algorithm's effectiveness, drawing comparisons with state-of-the-art methods across various performance metrics. The evaluation encompasses three distinct scenarios, wherein the validation process incorporates FFT, GE, and constrained optimization challenges from the renowned CEC2020 competition. Complementing these scenarios, a diverse array of experiments centered around task scheduling in a multiprocessing environment is conducted. The validation outcomes underscore the algorithm's efficacy. It is noteworthy that each algorithm undergoes independent iterations, refining its functions in pursuit of optimal performance. The comprehensive evaluation aims to understand the algorithm's capabilities in addressing real-world optimization and scheduling challenges. The validation of the algorithm is facilitated through simulations conducted within a replicated heterogeneous distributed embedded system environment, comprising 95 processors capable of handling tasks of varying complexities. These processors exhibit diverse processing capabilities, with their specifications and the corresponding application parameters closely mirroring the details outlined in Refs. [8,91]. The input data parameters employed in this stage are delineated in Table 6, wherein each frequency magnitude undergoes discretization and is represented with a precision of 0.01 GHz. Following the simulations, a comprehensive evaluation is undertaken, wherein various assessment metrics are computed, including the average execution time, as well as the standard deviation and mean of solutions across the iterative process, as described in [98]. This evaluation replicates real-world heterogeneous distributed embedded systems, providing insights into the algorithm's performance.

Algorithm 2 The HWWO task scheduling algorithm
Input:
  Dataset 𝑊̂𝐶𝐸𝑇
  No. of processors, |𝑌|
  No. of tasks, |𝑋|
  Population size, |𝑌|
  Control coefficient, 𝜇⃗
  Maximum no. of iterations, 𝑟
Output: Global best solution (best task assignment)
Begin
1: matrix assignments[|𝑌|][|𝑋|] = randomly assign |𝑋| tasks to the processors, prioritizing on the basis of rank;
2: tasks = order by (rank, decreasing);
3: function fitness(assignment)
4:   𝑓𝑓𝑛(𝜏𝑖, 𝑌𝑙, 𝑓𝑙,ℎ) = min[ 𝜎1 ∗ Σ_{𝑙=1}^{|𝑌|} 𝑃𝑙,𝑠 ∗ (𝑀𝑆(𝐺)/𝐷𝐿(𝐺)) + 𝜎2 ∗ Σ_{𝑖=1}^{|𝑋|} 𝑒^{−𝜆𝑙,𝑚𝑎𝑥 ∗ (𝑤̂𝑖,𝑙 ∗ 𝑓𝑙,𝑚𝑎𝑥)/𝑓𝑙,ℎ} ];
5:   return 𝑓𝑓𝑛(𝜏𝑖, 𝑌𝑙, 𝑓𝑙,ℎ);
6: end function
7: function inversion_mutation(assignment, mutation_rate)
8:   if random(0, 1) < mutation_rate then
9:     i, j = sorted(random(|𝑋|, 2));
10:     assignment[i : j + 1] = assignment[i : j + 1][:: −1];
11:   end if
12:   return assignment;
13: end function
14: for i = 0 to |𝑌| − 1 do
15:   assignments[i] = random assignment to processors (tasks);
16:   assignments[i] = inversion_mutation(assignments[i], 0.2);
17: end for
18: arr fitness_values[|𝑌|];
19: for i = 0 to |𝑌| − 1 do
20:   fitness_values[i] = fitness(assignments[i]);
21: end for
22: 𝑍⃗[0] = alpha assignment;
23: 𝑍⃗[1] = beta assignment;
24: 𝑍⃗[2] = delta assignment;
25: function wolf(𝑀⃗′, 𝜇⃗, 𝜉⃗, 𝜁⃗)
26:   matrix wolf_particles[3][|𝑋|];
27:   for i = 0 to 2 do
28:     𝛶⃗ = |𝜉⃗ ∗ 𝑍⃗[i] − 𝑀⃗′|;
29:     wolf_particles[i] = 𝑍⃗[i] − 𝜁⃗ ∗ 𝛶⃗;
30:   end for
31:   return (wolf_particles[0] + wolf_particles[1] + wolf_particles[2]) / 3;
32: end function
33: while 𝑟 < max_iteration do
34:   𝜇⃗ = 2 − 𝑟 ∗ (2 / max_iteration);
35:   𝑏 = random constant;
36:   update best assignments on the basis of least fitness value;
37:   for i = 0 to |𝑌| − 1 do
38:     update 𝑟, 𝜁⃗, 𝜉⃗, 𝜈, 𝑝;
39:     if 𝑝 < 0.5 then
40:       𝐷⃗ = |𝜉⃗ ∗ best_assignment − assignments[i]|;
41:       if |𝜁⃗| < 1 then
42:         assignments[i] = |best_assignment − 𝜁⃗ ∗ 𝐷⃗|;
43:       else
44:         j = random(0, |𝑌|);
45:         assignments[i] = assignments[j] − 𝜁⃗ ∗ 𝐷⃗;
46:       end if
47:     else
48:       𝐷⃗′ = |best_assignment − assignments[i]|;
49:       𝑀⃗′ = 𝐷⃗′ ∗ 𝑒^{𝑏𝜈} ∗ cos(2𝜋𝜈) + assignments[i];
50:       assignments[i] = wolf(𝑀⃗′, 𝜇⃗, 𝜁⃗, 𝜉⃗);
51:     end if
52:     assignments[i] = inversion_mutation(assignments[i], 0.2);
53:     assignments[i] = insert_reversed_block_operation(assignments[i]);
54:   end for
55:   if (𝑓𝑓𝑛(𝑟) − 𝑓𝑓𝑛(𝑟 − 1)) / 𝑓𝑓𝑛(𝑟) < 𝜖𝑓 (= 0.001) then
56:     𝑟 = max_iteration;
57:   else
58:     𝑟 = 𝑟 + 1;
59:   end if
60:   for i = 0 to |𝑌| − 1 do
61:     fitness_values[i] = fitness(assignments[i]);
62:   end for
63: end while
64: best_assignment = assignment with least fitness value;
65: Implement the EES algorithm to optimize voltage and frequency distribution among all processors

Table 6
Parameter setting for stage I.
Parameter          Values
𝑤̂𝑖,𝑙 (ms)          [10, 100]
𝑐̂𝑖,𝑘 (ms)          [10, 100]
𝑃𝑙,𝑠               [0.1, 0.5]
𝑃𝑙,𝑖𝑛𝑑             [0.03, 0.07]
𝐶𝑙,𝑒𝑓              [0.8, 1.2]
𝑚𝑙                 [2.5, 3.0]
𝑓𝑙,𝑚𝑎𝑥             1 GHz
𝑙1 and 𝑙2          [0, 1]
𝜇⃗                  [0, 2]
𝜆𝑙,𝑚𝑎𝑥             [0.0003, 0.0009]
𝐶𝐶𝑅                0.1, 0.5, 1, 5, 10

6.2.1. Scenario 1

This scenario conducts a rigorous assessment of the efficacy of the HWWO-based technique through an examination of 15 real-world optimization problems. The assessment utilizes four algorithms identified in the ‘CEC2020 competition on real-world single objective constrained optimization’, namely the SASS algorithm, the sCMAgES algorithm, the EnMODE algorithm, and the COLSHADE algorithm. The article leverages these algorithms to evaluate the performance of the HWWO-based technique on the selected real-world optimization problems.

Table 7
15 real-world constrained optimization problems.
Problem   Name                                                                                              D    g   h
𝐶𝑜𝑝𝑡1     Optimal power flow (minimization of active power loss)                                            126  0   116
𝐶𝑜𝑝𝑡2     Topology optimization                                                                             30   30  0
𝐶𝑜𝑝𝑡3     Process flow sheeting problem                                                                     3    3   0
𝐶𝑜𝑝𝑡4     Gas transmission compressor design (GTCD)                                                         4    1   0
𝐶𝑜𝑝𝑡5     SOPWM for 3-level inverters                                                                       25   24  1
𝐶𝑜𝑝𝑡6     Optimal power flow (minimization of fuel cost)                                                    126  0   116
𝐶𝑜𝑝𝑡7     Optimal power flow (minimization of active power loss and fuel cost)                              126  0   116
𝐶𝑜𝑝𝑡8     SOPWM for 5-level inverters                                                                       25   24  1
𝐶𝑜𝑝𝑡9     Pressure vessel design                                                                            4    4   0
𝐶𝑜𝑝𝑡10    Optimal sizing of distributed generation for active power loss minimization                       153  0   148
𝐶𝑜𝑝𝑡11    Wind farm layout problem                                                                          30   91  0
𝐶𝑜𝑝𝑡12    Microgrid power flow (islanded case)                                                              76   0   76
𝐶𝑜𝑝𝑡13    Optimal setting of droop controller for minimization of active power loss in islanded microgrids  86   0   76
𝐶𝑜𝑝𝑡14    Microgrid power flow (grid-connected case)                                                        74   0   74
𝐶𝑜𝑝𝑡15    Optimal setting of droop controller for minimization of reactive power loss in islanded microgrids 86   0   76
The article presents a detailed description of the attributes that define the real-world problems under investigation, including their dimensions and the number of equality and inequality constraints involved. This information is comprehensively outlined in Table 7. To ensure a fair and consistent comparison, the parameters of the four algorithms are maintained at their original settings as documented in the relevant literature [94–97]. To ensure an impartial evaluation, the proposed algorithm is executed for 500 iterations, adhering to the guidelines outlined in [98]; this ensures an equal number of function evaluations across methods. Subsequently, a statistical analysis is conducted using the Wilcoxon signed-rank test [99] to assess the HWWO's performance relative to the other algorithms under consideration.

The simulation outcomes for the 15 real-world challenges are showcased in Table 8, which provides comprehensive information regarding the best fitness value, mean fitness value, worst fitness value, and the standard deviation (St. Dev) of the fitness values. The results displayed in Table 8 demonstrate the proficiency of the HWWO algorithm in tackling the majority of these problems, exhibiting commendable performance. Notably, some of the solutions obtained by HWWO surpass those achieved by competing algorithms. To facilitate a comprehensive comparison of algorithm performance on the proposed benchmark suite, we have adopted the ranking methodology outlined in the CEC2020 competition [98]. The evaluation process assigns weighted scores to the best, mean, and median results obtained from 25 independent runs of each algorithm to quantify their performance, as outlined in [98]. The weighted performance measures (PM) are: HWWO (0.321089, rank 1), SASS (0.335719, rank 2), EnMODE (0.351856, rank 3), sCMAgES (0.415387, rank 4), and COLSHADE (0.493992, rank 5). Outperforming the others, HWWO demonstrates its effectiveness in handling diverse real-world problems with acceptable performance.

Table 8
Results of 15 real-world constrained optimization problems.
Problem   Metric          SASS       COLSHADE   EnMODE     sCMAgES    HWWO
𝐶𝑜𝑝𝑡1     Best fitness    5.13E+4    7.08E+4    7.10E+4    7.12E+4    5.19E+4
          Mean fitness    5.33E+4    7.08E+4    7.22E+4    7.22E+4    5.37E+4
          Worst fitness   5.51E+4    7.08E+4    7.61E+4    7.71E+4    5.66E+4
          St. Dev         1.07E+2    2.12E−12   1.14E+2    2.14E+2    1.15E+2
𝐶𝑜𝑝𝑡2     Best fitness    3.06E+4    3.06E+4    3.06E+04   3.06E+04   3.05E+4
          Mean fitness    3.06E+4    3.06E+4    3.08E+04   3.08E+04   3.06E+4
          Worst fitness   3.06E+4    3.07E+4    3.08E+04   3.08E+04   3.06E+4
          St. Dev         3.07E−11   2.51E−7    2.53E+1    2.51E+1    7.46E−1
𝐶𝑜𝑝𝑡3     Best fitness    3.68E+04   3.67E+4    3.67E+4    3.67E+4    3.63E+4
          Mean fitness    3.68E+04   3.69E+4    3.67E+4    3.67E+4    3.63E+4
          Worst fitness   3.68E+04   3.69E+4    3.67E+4    3.67E+4    3.63E+4
          St. Dev         1.69E−15   7.51E+1    4.53E−15   4.53E−15   1.66E−16
𝐶𝑜𝑝𝑡4     Best fitness    5.29E+05   5.46E+05   5.32E+05   5.45E+05   5.26E+05
          Mean fitness    5.29E+05   5.46E+05   5.35E+05   5.45E+05   5.28E+05
          Worst fitness   5.31E+05   5.48E+05   5.36E+05   5.48E+05   5.28E+05
          St. Dev         7.13E−4    2.36E−4    1.66E+2    3.31E−04   1.16E−1
𝐶𝑜𝑝𝑡5     Best fitness    1.60E+6    1.67E+06   1.34E+06   1.34E+6    1.34E+06
          Mean fitness    1.67E+6    1.67E+06   1.34E+06   1.38E+6    1.34E+06
          Worst fitness   1.67E+6    1.67E+06   1.36E+06   1.38E+6    1.36E+06
          St. Dev         2.06E+4    1.05E−09   3.66E−7    5.14E+2    3.66E−7
𝐶𝑜𝑝𝑡6     Best fitness    6.85E+07   6.91E+7    6.85E+07   6.85E+07   6.85E+07
          Mean fitness    6.87E+07   6.91E+7    6.87E+07   6.89E+07   6.87E+07
          Worst fitness   6.89E+07   6.93E+7    6.89E+07   6.89E+07   6.89E+07
          St. Dev         2.27E+04   6.61E−04   2.27E+04   5.42E+04   2.27E+04
𝐶𝑜𝑝𝑡7     Best fitness    5.73E+6    5.76E+6    5.73E+6    5.77E+6    5.76E+6
          Mean fitness    5.74E+6    5.79E+6    5.74E+6    5.79E+6    5.77E+6
          Worst fitness   5.76E+6    5.79E+6    5.76E+6    5.79E+6    5.77E+6
          St. Dev         1.05E+4    6.05E+4    1.05E+4    3.08E+4    5.05E+3
𝐶𝑜𝑝𝑡8     Best fitness    9.93E−4    9.97E−04   8.92E−4    8.92E−4    8.92E−4
          Mean fitness    9.95E−4    9.97E−04   8.92E−4    8.92E−4    8.95E−4
          Worst fitness   9.96E−4    9.97E−04   8.96E−4    8.93E−4    8.95E−4
          St. Dev         4.34E−6    5.41E−06   6.63E−4    4.31E−6    4.31E−6
𝐶𝑜𝑝𝑡9     Best fitness    1.89E+2    4.08E+03   4.08E+03   4.18E+03   1.89E+2
          Mean fitness    1.91E+2    4.25E+03   4.31E+03   4.26E+03   1.91E+2
          Worst fitness   1.99E+2    4.37E+03   4.37E+03   4.31E+03   1.99E+2
          St. Dev         2.80E−1    8.85E+01   5.85E+01   1.55E+01   2.80E−1
𝐶𝑜𝑝𝑡10    Best fitness    6.16E−02   6.74E−02   6.26E−02   3.26E−02   3.26E−02
          Mean fitness    6.96E−02   7.85E−02   7.59E−02   6.96E−02   6.96E−02
          Worst fitness   7.92E−02   9.04E−02   9.23E−02   7.32E−02   7.32E−02
          St. Dev         5.28E−02   5.26E−02   4.49E−05   4.28E−05   4.28E−05
𝐶𝑜𝑝𝑡11    Best fitness    3.06E+4    3.06E+4    3.11E+4    3.02E+4    2.94E+4
          Mean fitness    3.08E+4    3.09E+4    3.11E+4    3.07E+4    2.94E+4
          Worst fitness   3.11E+4    3.13E+4    3.13E+4    3.07E+4    2.94E+4
          St. Dev         7.12E+1    2.61E+1    4.64E−4    4.64E+1    0
𝐶𝑜𝑝𝑡12    Best fitness    1.67E+1    1.67E+1    1.68E+1    1.72E+1    1.70E+1
          Mean fitness    1.67E+1    1.68E+1    1.68E+1    1.74E+1    1.73E+1
          Worst fitness   1.69E+1    1.69E+1    1.71E+1    1.74E+1    1.73E+1
          St. Dev         2.03E−1    2.03E−1    3.08E−1    4.13E+1    2.03E−1
𝐶𝑜𝑝𝑡13    Best fitness    5.75E+2    5.71E+2    5.24E+2    5.57E+2    5.24E+2
          Mean fitness    5.78E+2    5.79E+2    5.29E+2    5.59E+2    5.29E+2
          Worst fitness   5.79E+2    5.79E+2    5.33E+2    5.61E+2    5.33E+2
          St. Dev         2.51E+1    4.43E+1    1.61E+1    7.01E+1    1.61E+1
𝐶𝑜𝑝𝑡14    Best fitness    3.55E+2    3.62E+2    3.60E+2    3.55E+2    3.53E+2
          Mean fitness    3.74E+2    3.78E+2    4.45E+2    3.61E+2    3.61E+2
          Worst fitness   3.79E+2    3.79E+2    4.71E+2    3.66E+2    3.63E+2
          St. Dev         8.01E+2    4.23E+2    7.32E+2    3.39E+1    3.37E+1
𝐶𝑜𝑝𝑡15    Best fitness    1.93E+5    1.89E+5    1.91E+5    1.89E+5    1.89E+5
          Mean fitness    1.95E+5    1.91E+5    1.96E+5    1.89E+5    1.92E+5
          Worst fitness   1.99E+5    1.97E+5    1.96E+5    1.89E+5    1.97E+5
          St. Dev         1.32E+3    4.15E+3    5.80E+1    2.97E−11   3.35E+1

The results of the Wilcoxon signed-rank test, presented in Table 9, highlight the effectiveness of the proposed HWWO algorithm when compared to other methods. The table shows the ranks of the HWWO algorithm relative to the second algorithm in terms of best fitness values, with 𝑇+, 𝑇−, and 𝑇 indicating the statistical results; 𝑇+ represents the superiority of the HWWO algorithm. The p-values, calculated at a 5% significance level, test the null hypothesis that the median difference between the algorithms is zero. Additionally, the final row of Table 9 summarizes the counts of 𝑇+ and 𝑇−, as well as the test statistic, offering a clear overview of the results. The Wilcoxon signed-rank test is particularly suited for paired comparisons in situations where normality cannot be assumed, making it a reliable tool for validating performance differences in real-world optimization problems. The results indicate significant improvements in the performance of the HWWO algorithm, with 𝑇+ accounting for 86.96% and 𝑇− for 13.03% of the evaluated benchmarks (𝑝 < 0.05). These findings suggest that the HWWO algorithm provides faster convergence and improved accuracy in solving optimization problems, which could translate into better outcomes in practical applications such as logistics and scheduling optimization.

Fig. 14. DAG of FFT application with 𝜌 = 4.

6.2.2. Scenario 2

In this scenario, the proposed approach rigorously evaluates the effectiveness of the HWWO technique through an analysis of the FFT algorithm. The visual depiction, illustrated in Fig. 14, showcases a parallel implementation of the FFT application [8,86], incorporating a crucial parameter value of 𝜌 = 4. The parameter 𝜌 governs the application's task count |𝑋| via |𝑋| = (2 ∗ 𝜌 − 1) + 𝜌 ∗ log2(𝜌), and is exponentially related to an integer 𝑦 through 𝜌 = 2^𝑦. As illustrated in Fig. 14, the application achieves a task count of |𝑋| = 15, which occurs when 𝜌 is set to 4. The following three experiments are evaluated utilizing the FFT application.

Experiment 1: This experimental work aims to conduct a comprehensive comparative evaluation of the proposed HWWO technique against existing algorithms. The primary focus is to assess their respective performances concerning total energy consumption and computational time requirements within the context of parallel applications involving FFT. This experimental study employs rigorous deadline and reliability constraints, expressed as 𝐷𝐿(𝐺) = 𝐿𝐵(𝐺) ∗ 1.4 and 𝑅𝑒(𝑔𝑜𝑎𝑙)(𝐺) = 0.90 respectively, where the complexity of the parallel application is intrinsically linked to the quantity of constituent tasks it comprises. The study deliberately varies |𝑋| from 95 (smaller scenarios) to 2559 (larger scenarios), while concurrently investigating the effects of 𝜌 ranging from 16 to 256.

Tables 10 and 11 present the outcomes from using FFT applications with varying 𝜌 values. In all experiments, HEFT (without the DVFS technique) consistently consumes more energy. In Table 10, the parameter |𝑋| demonstrates a spectrum of values ranging from 95 to 2559. This underscores the superior energy consumption outcomes achieved by the HWWO algorithm in comparison to other existing algorithms. As the task count rises, both the DECM and REREC algorithms yield comparable performance levels. Notably, up to a task value of 511, the ESRG algorithm stands out for its lower energy consumption compared to EPM. Beyond this threshold, EPM gradually refines its outcomes, albeit at the expense of higher energy usage in contrast to DECM and REREC. The best outcomes, highlighted in bold text, are further illustrated in Fig. 15, which visually represents the data from Table 10, offering a comprehensive comparative analysis.
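Table 9's 𝑇+ (rank sum of positive paired differences), 𝑇− (rank sum of negative differences), and 𝑇 = min(𝑇+, 𝑇−) can be recomputed from paired fitness values as sketched below. This is a generic illustration of the signed-rank statistics rather than the authors' code: the convention of dropping zero differences and averaging tied ranks is an assumption, and the reported p-values would come from the usual Wilcoxon null distribution (e.g., scipy.stats.wilcoxon).

```python
def wilcoxon_T(sample_a, sample_b):
    """Return (T+, T-, T) for paired samples: rank |differences|,
    drop zeros, average ranks over ties, then sum ranks by sign."""
    diffs = [a - b for a, b in zip(sample_a, sample_b) if a != b]
    order = sorted(range(len(diffs)), key=lambda k: abs(diffs[k]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied absolute differences
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    t_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    t_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return t_plus, t_minus, min(t_plus, t_minus)
```

For n nonzero differences, 𝑇+ + 𝑇− always equals n(n + 1)/2, which provides a quick sanity check on the rank sums reported in Table 9.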
Problem HWWO vs SASS HWWO vs COLSHADE HWWO vs EnMODE HWWO vs sCMAgES Rank Rank Rank Rank 𝐶𝑜𝑝𝑡1 7 4 3 4 𝐶𝑜𝑝𝑡2 2 1 1 1.5 𝐶𝑜𝑝𝑡3 6 10 9 9 𝐶𝑜𝑝𝑡4 4.5 5 10 3 𝐶𝑜𝑝𝑡5 10 7 – – 𝐶𝑜𝑝𝑡6 – 12 – – 𝐶𝑜𝑝𝑡7 4.5 – 6.5 1.5 𝐶𝑜𝑝𝑡8 1 2 – – 𝐶𝑜𝑝𝑡9 – 9 8 8 𝐶𝑜𝑝𝑡10 3 8 6.5 – 𝐶𝑜𝑝𝑡11 8 3 2 10 𝐶𝑜𝑝𝑡12 11 6 4.5 5.5 𝐶𝑜𝑝𝑡13 13 11 7 𝐶𝑜𝑝𝑡14 9 13 11 5.5 𝐶𝑜𝑝𝑡15 12 – 4.5 – p-value 0.0116 0.0452 0.0370 0.0352 𝑇 + = Sum of positive number ranks 68.5 85 55 55 𝑇 − = Sum of negative number ranks 22.5 6 11 0 𝑇 = min(𝑇 + , 𝑇 − ) 22.5 6 11 0 Table 10 Energy consumption analysis for FFT parallel applications across task configurations. |𝑋| Performance Algorithm metric HEFT DECM EPM REREC ESRG HWWO 95 Best 1.48E+4 2.73E+3 6.83E+3 3.07E+3 6.81E+3 1.68E+3 Mean 1.53E+4 2.80E+3 6.90E+3 3.26E+3 6.89E+3 1.72E+3 Worst 1.58E+4 2.83E+3 6.94E+3 3.33E+3 6.94E+3 1.83E+3 St. Dev 3.17E+3 5.78E+1 7.31E+1 4.13E+1 7.23E+1 3.14E+1 223 Best 2.53E+4 7.13E+3 8.43E+3 7.44E+3 8.39E+3 4.11E+3 Mean 2.61E+4 7.21E+3 8.47E+3 7.49E+3 8.47E+3 4.18E+3 Worst 2.67E+4 7.33E+3 8.56E+3 7.57E+3 8.54E+3 4.23E+3 St. Dev 6.20E+1 1.28E+1 6.16E+1 2.69E+2 3.80E+3 1.02E+1 511 Best 3.64E+4 1.32E+4 2.18E+4 1.27E+4 2.06E+4 6.58E+3 Mean 3.72E+4 1.32E+4 2.27E+4 1.35E+4 2.17E+4 6.61E+3 Worst 3.72E+4 1.32E+4 2.37E+4 1.46E+4 2.29E+4 6.77E+3 St. Dev 1.28E+1 3.47E−7 5.34E+1 7.28E+2 4.21E+2 5.54E+1 1151 Best 7.34E+4 3.26E+4 4.73E+4 3.26E+4 4.79E+4 9.75E+3 Mean 7.41E+4 3.35E+4 4.81E+4 3.37E+4 4.88E+4 9.79E+3 Worst 7.49E+4 3.43E+4 4.83E+4 3.43E+4 4.97E+4 9.79E+3 St. Dev 4.13E+4 2.13E+1 7.34E+2 5.81E+3 7.03E+2 2.34E+1 2559 Best 9.35E+4 6.17E+4 6.44E+4 6.21E+4 6.71E+4 4.73E+4 Mean 9.43E+4 6.28E+4 6.51E+4 6.37E+4 6.82E+4 4.86E+4 Worst 9.51E+4 6.39E+4 6.59E+4 6.47E+4 6.88E+4 4.93E+4 St. Dev 5.82E+4 3.72E+2 1.80E+2 4.63E+2 1.80E+2 7.37E+2 Within Table 11, it is notable that the EPM algorithm requires the specified reliability goals compared to other existing methods. significantly more 𝐶𝑇𝑇 𝐴 . 
However, as the number of tasks increases, Contrastingly, the HEFT, EPM, and ESRG algorithms exhibit an inability ESRG surpasses the other three algorithms in producing higher energy to fulfill the reliability constraints in the majority of scenarios. As values. The 𝐶𝑇𝑇 𝐴 of the newly proposed HWWO algorithm is projected the reliability objective escalates from 0.91 to 0.95, HEFT, EPM, and to occupy between 31.20% and 35.61% of the computational time ESRG manage to comply with the requirements, but struggle beyond required by the DECM and REREC algorithms. Regarding performance that range. Conversely, DECM, REREC, and HWWO successfully meet metrics, across a spectrum of values for |𝑋| from 95 to 2559, the 𝐶𝑇𝑇 𝐴 the reliability constraint within the range of 0.91 to 0.98, although of the HWWO closely mirrors that of the DECM algorithm, consistently none of the algorithms can fulfill the rigorous 0.99 requirement. It surpassing it. is noteworthy that if the upper bound for the reliability objective is established at an excessively elevated level, the maximum attainable Experiment 2: The study evaluates the reliability metrics and total reliability values for partial tasks may fall short of this upper bound in energy consumption of an extensive FFT application under varying practical implementation scenarios. reliability constraints. The experimental configuration involves 1151 Data presented in Table 12 has been shown graphically in Fig. 17. tasks, with 𝜌 = 128. Additionally, the reliability goal, 𝑅𝑒(𝑔𝑜𝑎𝑙) (𝐺), is The table evaluates the energy consumption profiles of FFT applications systematically varied from 0.91 to 0.99 in increments of 0.01, enabling when subjected to varying reliability criteria. For reliability thresholds an assessment of the corresponding effects on reliability performance up to 0.98, the techniques DECM, REREC, and HWWO exhibit superior and energy utilization. 
energy consumption performance in comparison to HEFT, EPM, and The graphical representation in Fig. 16 illustrates the actual relia- ESRG. The algorithms HEFT, EPM, and ESRG are capable of producing bility values attained by the large-scale FFT application when subjected energy outcomes only up to a reliability criterion of 0.95, as they fail to varying reliability criteria. Among the techniques evaluated, the to meet the reliability constraints beyond this point, as evidenced by HWWO algorithm demonstrates superior performance in accomplishing the findings illustrated in Fig. 16. Until the 0.98 reliability threshold, 19 Karishma and H. Kumar Computer Standards & Interfaces 97 (2026) 104106 Fig. 15. Graphical representation of Table 10. Table 11 𝐶𝑇𝑇 𝐴 values of FFT applications across diverse task quantities. |𝑋| Performance Algorithm metric DECM EPM REREC ESRG HWWO 95 Best 1.5E+1 2.57E+2 1.9E+1 2.31E+2 1.2E+1 Mean 2.62E+1 2.73E+2 2.6E+1 2.66E+2 2.2E+1 Worst 2.80E+1 3.27E+2 2.91E+1 3.25E+2 2.6E+1 St. Dev 3.15E+1 1.19E+2 5.4E+1 2.4E+2 4.8E+0 223 Best 2.1E+1 5.04E+2 3.9E+1 4.6E+2 2.1E+1 Mean 2.7E+1 5.73E+2 4.3E+1 4.8E+2 2.7E+1 Worst 3.8E+1 5.91E+2 4.3E+1 5.3E+2 3.1E+1 St. Dev 3.42E−1 4.21E+2 2.06E+0 3.07E+2 3.42E+0 511 Best 5.6E+1 3.5E+3 6.3E+1 3.2E+3 4.7E+1 Mean 5.81E+1 3.6E+3 6.8E+1 3.4E+3 4.7E+1 Worst 6.04E+1 3.6E+3 7.3E+1 3.5E+3 4.92E+1 St. Dev 3.66E+1 3.01E−1 7.2E+1 6.7E+0 8.9E−1 1151 Best 2.62E+2 4.16E+3 3.31E+2 4.73E+3 2.49E+2 Mean 2.74E+2 4.16E+3 3.31E+2 4.73E+3 2.51E+2 Worst 2.74E+2 4.47E+3 3.47E+2 4.73E+3 2.51E+2 St. Dev 3.31E−1 1.71E+3 1.08E−1 3.26E−1 3.31E−1 2559 Best 4.7E+2 8.33E+3 5.4E+2 8.72E+3 3.48E+2 Mean 4.8E+2 8.42E+3 5.6E+2 8.74E+3 3.71E+2 Worst 4.8E+2 8.46E+3 5.6E+2 8.83E+3 3.71E+2 St. Dev 1.07E+1 3.72E+1 1.07E+1 6.01E+2 7.06E+2 DECM, REREC, and HWWO successfully fulfill the reliability constraints larger-scale scenarios), while simultaneously exploring the impacts of in the majority of scenarios. 
Notably, among these three techniques, 𝜌 ranging from 16 to 256. the HWWO algorithm demonstrates more favorable results by further The scheduling length ratio (SLR) is a widely adopted metric em- optimizing energy consumption through an expanded exploration of ployed for evaluating and contrasting various scheduling algorithms. It processor and frequency combination possibilities. HWWO surpasses is quantified as the ratio of the makespan to the cumulative sum of the DECM and REREC in energy savings, reducing consumption by 33% minimum execution times of all tasks residing on the critical path of the and 36% on average, correspondingly. However, it is pertinent to note DAG [86]. This can be expressed through the following mathematical that none of the algorithms evaluated can achieve the stringent 0.99 formulation: reliability requirement. 𝑀𝑆(𝐺) 𝑆𝐿𝑅 = ∑ (44) 𝜏𝑖 ∈𝐶𝑃𝑀𝐼𝑁 𝑚𝑖𝑛𝑌𝑙 ∈𝑌 (𝑤 ̂ 𝑖,𝑙 ) Experiment 3: The current experiment examines the SLR and CCR metrics for a comprehensive FFT application, considering variations The data presented in Table 13 and visually represented in Fig. in 𝜌. This experimental approach incorporates a stringent deadline 18 demonstrates the average performance of various tasks scheduling requirement, expressed as 𝐷𝐿(𝐺) = 𝐿𝐵(𝐺) ∗ 1.4, where the com- algorithms in terms of the SLR metric. The proposed HWWO algorithm plexity of the parallel application is inherently tied to the number of exhibited the lowest SLR values across all experiments, outperforming constituent tasks it encompasses. The study systematically alters |𝑋| the other techniques evaluated. Concerning SLR, HWWO established from 95 (representing smaller-scale scenarios) to 2559 (representing itself as the superior approach. Across all task sizes, the HEFT algorithm 20 Karishma and H. Kumar Computer Standards & Interfaces 97 (2026) 104106 Fig. 16. Graphical representation of actual reliability values under varying reliability constraints (in case of FFT). 
Across all task sizes, the HEFT algorithm consistently generated the poorest schedules, trailing behind EPM and ESRG. Initially, EPM underperformed compared to ESRG, but as the number of tasks increased, its performance surpassed that of ESRG. Notably, in scenarios where every path within the DAG constituted a critical path, the DECM and REREC algorithms achieved comparable results, superior to HEFT, EPM, and ESRG, regardless of the input size. The average SLR performance of HWWO across all generated graphs exceeded that of the DECM algorithm by 15% and the REREC algorithm by 20.98%.

Table 12
Energy consumption assessment for FFT applications with varying reliability criteria.
Goal   Metric    HEFT     DECM      EPM      REREC     ESRG     HWWO
0.91   Best      4.44E+4  2.36E+4   3.73E+4  2.46E+4   3.53E+4  1.25E+4
       Mean      4.51E+4  2.45E+4   3.81E+4  2.47E+4   3.71E+4  1.29E+4
       Worst     4.58E+4  2.47E+4   3.83E+4  2.53E+4   3.83E+4  1.32E+4
       St. Dev   4.23E+1  4.13E−2   5.24E+2  5.81E−2   6.34E+2  5.24E−4
0.92   Best      4.74E+4  2.43E+4   3.89E+4  2.54E+4   3.83E+4  1.71E+4
       Mean      4.81E+4  2.47E+4   3.91E+4  2.54E+4   3.86E+4  1.78E+4
       Worst     4.92E+4  2.47E+4   3.91E+4  2.54E+4   3.86E+4  1.78E+4
       St. Dev   7.32E+4  6.28E−11  7.04E−4  6.28E−11  5.34E−4  4.02E−7
0.93   Best      5.64E+4  3.71E+4   4.28E+4  3.71E+4   4.06E+4  2.71E+4
       Mean      5.72E+4  3.78E+4   4.33E+4  3.85E+4   4.37E+4  2.88E+4
       Worst     5.82E+4  3.88E+4   4.47E+4  3.92E+4   4.49E+4  2.94E+4
       St. Dev   3.28E+4  7.12E+4   2.02E+4  5.08E+4   2.02E+4  2.02E+4
0.94   Best      6.74E+4  3.87E+4   4.89E+4  3.81E+4   4.82E+4  2.87E+4
       Mean      6.74E+4  3.93E+4   4.93E+4  3.95E+4   4.89E+4  2.97E+4
       Worst     6.74E+4  3.99E+4   4.97E+4  3.99E+4   4.97E+4  2.99E+4
       St. Dev   3.28E+4  1.12E+4   7.02E+4  1.12E+4   5.52E+4  6.42E+4
0.95   Best      8.45E+4  6.17E+4   6.84E+4  6.37E+4   6.71E+4  4.24E+4
       Mean      8.48E+4  6.28E+4   6.88E+4  6.37E+4   6.82E+4  4.46E+4
       Worst     8.51E+4  6.39E+4   6.95E+4  6.37E+4   6.88E+4  4.53E+4
       St. Dev   5.82E+4  3.72E+2   1.80E+4  4.63E−8   8.75E+4  7.37E+4
0.96   Best      –        6.66E+4   –        6.71E+4   –        4.51E+4
       Mean      –        6.78E+4   –        6.87E+4   –        4.51E+4
       Worst     –        6.78E+4   –        6.91E+4   –        4.63E+4
       St. Dev   –        3.72E+4   –        4.63E−4   –        1.17E−7
0.97   Best      –        7.05E+4   –        7.21E+4   –        6.19E+4
       Mean      –        7.08E+4   –        7.21E+4   –        6.19E+4
       Worst     –        7.09E+4   –        7.21E+4   –        6.19E+4
       St. Dev   –        3.13E+3   –        3.33E−11  –        3.33E−11
0.98   Best      –        8.29E+4   –        8.41E+4   –        7.39E+4
       Mean      –        8.31E+4   –        8.48E+4   –        7.57E+4
       Worst     –        8.34E+4   –        8.48E+4   –        7.61E+4
       St. Dev   –        6.37E+2   –        1.17E+2   –        4.52E+3
0.99   Best      –        –         –        –         –        –
       Mean      –        –         –        –         –        –
       Worst     –        –         –        –         –        –
       St. Dev   –        –         –        –         –        –

Fig. 17. Graphical representation of Table 12.

Table 13
Average SLR for all algorithms with respect to various tasks (in case of FFT).
|X|    HEFT    DECM    EPM     REREC   ESRG    HWWO
95     5.1E+1  3.4E+1  4.2E+1  3.7E+1  4.0E+1  2.4E+1
223    6.2E+1  4.6E+1  4.9E+1  4.6E+1  4.9E+1  3.8E+1
511    8.8E+1  5.2E+1  6.5E+1  5.8E+1  6.2E+1  4.7E+1
1151   1.1E+2  7.8E+1  8.8E+1  8.5E+1  9.2E+1  6.9E+1
2559   1.3E+2  8.3E+1  9.2E+1  8.9E+1  9.6E+1  7.6E+1

Fig. 18. Graphical representation of Table 13.

The communication-to-computation ratio (CCR) quantifies the relative significance of communication overhead by dividing the cumulative communication times across all edges by the total execution times across all nodes in a DAG. Fig. 19 illustrates the average SLR performance of the algorithms as a function of the CCR. Across CCR values, HEFT consistently exhibited the poorest SLR results, surpassed by both EPM and ESRG, while DECM, REREC, and HWWO generated superior schedules compared to these algorithms. The schedules produced by DECM and REREC are comparable across different CCR values. However, HWWO emerged as the top-performing algorithm, yielding the best SLR outcomes for all CCR values considered. Notably, the average SLR performance of HWWO across all generated graphs surpassed that of DECM by 14.16% and REREC by 19.91%.

Fig. 19. Graphical representation of SLR values with respect to CCR (in case of FFT).
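The CCR defined above is a single ratio over the whole DAG. A minimal sketch, assuming edge communication times and node execution times are given as dictionaries (names illustrative, not from the paper):

```python
def ccr(comm_times: dict[tuple[int, int], float],
        exec_times: dict[int, float]) -> float:
    """Communication-to-computation ratio of a DAG: total communication
    time over all edges divided by total execution time over all nodes."""
    return sum(comm_times.values()) / sum(exec_times.values())

# Hypothetical 3-node chain DAG 0 -> 1 -> 2:
edges = {(0, 1): 2.0, (1, 2): 4.0}
nodes = {0: 3.0, 1: 3.0, 2: 6.0}
print(ccr(edges, nodes))  # (2 + 4) / (3 + 3 + 6) = 0.5
```

CCR well below 1 characterizes computation-dominated workloads and CCR above 1 communication-dominated ones, which is why SLR is reported as a function of CCR.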
6.2.3. Scenario 3

To rigorously assess the proposed technique's effectiveness, the study conducts a comprehensive evaluation of the HWWO algorithm's performance on the Gaussian elimination (GE) problem. Fig. 20 depicts a parallel implementation of the GE application with the critical parameter value ρ = 5, as described in [8,86]. The total number of tasks, denoted |X|, is determined by the expression |X| = (ρ² + ρ − 2)/2. Specifically, when ρ = 5, the resulting task count is |X| = 14, as illustrated in Fig. 20. This scenario highlights the interplay between the parameters ρ and |X| in parallel computing applications. The following three experiments evaluate the performance of the proposed approach using the GE application as a benchmark.

Fig. 20. DAG of GE application with ρ = 5.

Experiment 4: The goal of this experiment is a comparative assessment of the proposed HWWO technique against existing algorithms, focusing on total energy consumption and computation time for parallel GE applications. The experiment imposes stringent deadline and reliability constraints, formulated as DL(G) = LB(G) ∗ 1.4 and Re(goal)(G) = 0.90 respectively, where the complexity of the parallel application is intrinsically linked to the number of its constituent tasks. To comprehensively analyze the techniques' behavior, the study methodically varies the task count |X| over a substantial range, from 90 tasks (smaller-scale scenarios) to 2555 tasks (larger-scale scenarios), while concurrently investigating the effects of ρ ranging from 13 to 71.

Tables 14 and 15 present the outcomes from GE applications with varying ρ values; the parameter |X| ranges from 90 to 2555. In all experiments, HEFT (without the DVFS technique) consistently consumes the most energy. Table 14 highlights the superior energy efficiency of the HWWO algorithm compared to the other existing algorithms. As the number of tasks increases, the DECM and REREC algorithms exhibit similar performance levels. Notably, the ESRG algorithm outperforms EPM in terms of lower energy consumption up to 495 tasks; beyond that, EPM gradually improves its results but at the cost of higher energy usage compared to DECM and REREC. The best outcomes, highlighted in bold, are further illustrated in Fig. 21, which provides a visual comparative analysis of the data from Table 14.

Table 14
Energy consumption analysis for GE parallel applications across task configurations.
|X|    Metric    HEFT     DECM     EPM      REREC    ESRG     HWWO
90     Best      3.18E+4  3.36E+3  7.88E+3  3.77E+3  7.81E+3  2.78E+3
       Mean      3.33E+4  3.43E+3  7.94E+3  3.86E+3  7.89E+3  2.81E+3
       Worst     3.38E+4  3.43E+3  7.94E+3  3.93E+3  7.94E+3  2.88E+3
       St. Dev   5.04E+1  1.42E+0  3.31E+1  4.13E+1  2.13E+1  5.07E+1
230    Best      4.73E+4  7.93E+3  8.63E+3  7.93E+3  8.49E+3  4.71E+3
       Mean      4.81E+4  7.93E+3  8.77E+3  7.93E+3  8.53E+3  4.79E+3
       Worst     4.81E+4  7.93E+3  8.78E+3  7.93E+3  8.59E+3  4.83E+3
       St. Dev   6.20E+4  2.32E−7  3.26E+1  2.32E−7  3.80E+1  2.32E+1
495    Best      6.27E+4  2.62E+4  3.08E+4  2.67E+4  3.05E+4  9.68E+3
       Mean      6.27E+4  2.62E+4  3.16E+4  2.73E+4  3.11E+4  9.68E+3
       Worst     6.27E+4  2.62E+4  3.16E+4  2.83E+4  3.16E+4  9.72E+3
       St. Dev   8.08E−4  3.47E−7  2.34E+2  1.28E+3  4.44E+1  5.54E−3
1127   Best      9.44E+4  6.05E+4  7.03E+4  6.35E+4  7.59E+4  4.25E+4
       Mean      9.44E+4  6.05E+4  7.13E+4  6.35E+4  7.62E+4  4.25E+4
       Worst     9.49E+4  6.11E+4  7.13E+4  6.41E+4  7.67E+4  4.25E+4
       St. Dev   1.13E+2  1.25E+4  2.14E+4  1.15E+4  5.03E+4  7.34E−9
2555   Best      9.75E+4  7.65E+4  9.44E+4  7.97E+4  9.54E+4  7.13E+4
       Mean      9.82E+4  7.67E+4  9.51E+4  7.97E+4  9.54E+4  7.17E+4
       Worst     9.88E+4  7.67E+4  9.59E+4  7.97E+4  9.59E+4  7.17E+4
       St. Dev   1.02E+3  2.37E−1  4.80E+4  5.87E−7  3.80E+1  2.37E+2

Fig. 21. Graphical representation of Table 14.

Table 15 shows that the EPM algorithm requires a significantly higher CT_TA, and that, as the number of tasks increases, ESRG's CT_TA grows beyond that of the other three algorithms. The proposed HWWO algorithm's CT_TA is projected to be 33.4% to 37.39% of the computation time required by the DECM and REREC algorithms. For |X| values ranging from 90 to 2555, the HWWO algorithm's CT_TA closely follows and consistently outperforms that of the DECM algorithm.

Table 15
CT_TA values of GE applications across diverse task quantities.
|X|    Metric    DECM     EPM      REREC    ESRG     HWWO
90     Best      3.5E+1   4.31E+2  3.5E+1   3.81E+2  2.6E+1
       Mean      3.6E+1   4.43E+2  3.6E+1   3.91E+2  2.6E+1
       Worst     3.8E+1   4.43E+2  3.8E+1   3.95E+2  2.6E+1
       St. Dev   7.5E−1   5.6E+0   7.5E−1   5.8E+0   0
230    Best      5.1E+1   7.14E+2  5.1E+1   6.84E+2  3.9E+1
       Mean      5.9E+1   7.23E+2  6.7E+1   6.93E+2  4.4E+1
       Worst     6.8E+1   7.41E+2  6.7E+1   6.94E+2  4.9E+1
       St. Dev   6.4E+0   1.21E+2  7.4E+0   3.20E−1  4.02E−1
495    Best      6.4E+1   9.5E+3   7.1E+1   8.5E+3   6.4E+1
       Mean      6.9E+1   1.6E+4   8.3E+1   1.3E+4   6.9E+1
       Worst     7.2E+1   2.2E+4   9.2E+1   2.7E+4   7.2E+1
       St. Dev   3.3E+0   5.10E+3  8.3E+1   2.09E+2  3.3E+0
1127   Best      7.7E+2   8.19E+3  7.9E+2   8.73E+3  7.49E+2
       Mean      7.88E+2  8.25E+3  8.15E+2  8.83E+3  7.67E+2
       Worst     7.94E+2  8.37E+3  8.7E+2   8.87E+3  7.77E+2
       St. Dev   4.31E+1  2.71E+3  7.08E+1  5.08E+0  1.1E+1
2555   Best      8.57E+2  9.03E+3  8.77E+2  9.12E+3  8.57E+2
       Mean      8.61E+2  9.22E+3  8.81E+2  9.28E+3  8.61E+2
       Worst     8.69E+2  9.36E+3  8.91E+2  9.39E+3  8.69E+2
       St. Dev   4.9E−2   3.32E+1  6.07E+1  6.17E+1  4.9E+1

Experiment 5: This experiment evaluates the reliability metrics and total energy consumption of an extensive GE application under varying reliability constraints. The configuration involves 1127 tasks with ρ = 47. The reliability goal Re(goal)(G) is systematically varied from 0.91 to 0.99 in increments of 0.01, enabling an assessment of the corresponding effects on reliability performance and energy utilization.

The graphical representation in Fig. 22 illustrates the actual reliability values attained by the GE application when subjected to varying reliability criteria. Among the techniques evaluated, the HWWO algorithm demonstrates superior performance in accomplishing the specified reliability goals compared to the other existing methods. In contrast, the HEFT, EPM, and ESRG algorithms struggle to meet the reliability constraints in most scenarios: while they can comply when Re(goal)(G) is between 0.91 and 0.95, their performance deteriorates beyond that range. On the other hand, DECM, REREC, and HWWO successfully satisfy the reliability constraint from 0.91 to 0.98, but none of the algorithms can meet the stringent 0.99 requirement.

Fig. 23 visually represents the data from Table 16, showing the energy consumption of GE applications under different reliability constraints. Up to 0.98 reliability, DECM, REREC, and HWWO are more energy-efficient than HEFT, EPM, and ESRG. While HEFT, EPM, and ESRG fail to meet reliability constraints beyond 0.95, as evident from Fig. 22, DECM, REREC, and HWWO fulfill the reliability requirements up to 0.98 in most cases. Among them, HWWO achieves the best energy savings by exploring more processor and frequency combinations, reducing consumption by 33% and 37% compared to DECM and REREC, respectively. However, none of the algorithms meet the stringent 0.99 reliability requirement.

Experiment 6: This experiment examines the SLR and CCR metrics for a comprehensive GE application, considering variations in ρ. It incorporates a stringent deadline constraint, formulated as DL(G) = LB(G) ∗ 1.4, where the complexity of the parallel application is inherently tied to the number of its constituent tasks. The study systematically varies |X| from 90 (smaller-scale scenarios) to 2555 (larger-scale scenarios), while simultaneously exploring the impact of ρ ranging from 13 to 71.
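The task-count relation used throughout Scenario 3, |X| = (ρ² + ρ − 2)/2, reproduces every GE configuration quoted in Experiments 4–6 and can be checked directly:

```python
def ge_task_count(rho: int) -> int:
    """Task count |X| of the Gaussian-elimination DAG for parameter rho."""
    return (rho * rho + rho - 2) // 2

# Parameter values quoted in Scenario 3:
assert ge_task_count(5) == 14      # the example DAG in Fig. 20
assert ge_task_count(13) == 90     # smallest case in Experiments 4-6
assert ge_task_count(47) == 1127   # Experiment 5 configuration
assert ge_task_count(71) == 2555   # largest case
```

The formula grows quadratically in ρ, which is why sweeping ρ from 13 to 71 covers task counts from 90 to 2555.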
Fig. 22. Graphical representation of actual reliability values under varying reliability constraints (in case of GE).

Table 16
Energy consumption assessment for GE applications with varying reliability criteria.
Goal   Metric    HEFT     DECM     EPM      REREC     ESRG     HWWO
0.91   Best      4.54E+4  3.45E+4  3.83E+4  3.53E+4   3.73E+4  2.55E+4
       Mean      4.54E+4  3.45E+4  3.83E+4  3.53E+4   3.73E+4  2.55E+4
       Worst     4.58E+4  3.45E+4  3.83E+4  3.53E+4   3.79E+4  2.55E+4
       St. Dev   4.23E+2  4.13E−4  5.24E−4  1.13E−4   6.34E−2  5.24E−8
0.92   Best      4.74E+4  3.77E+4  4.18E+4  3.96E+4   4.16E+4  2.88E+4
       Mean      4.81E+4  3.78E+4  4.19E+4  3.98E+4   4.19E+4  2.88E+4
       Worst     4.92E+4  3.78E+4  4.19E+4  3.98E+4   4.19E+4  2.88E+4
       St. Dev   7.40E+2  4.27E−1  7.04E+1  9.42E+1   2.04E+2  7.02E−7
0.93   Best      5.74E+4  3.81E+4  4.28E+4  4.18E+4   4.28E+4  2.91E+4
       Mean      5.74E+4  3.83E+4  4.33E+4  4.20E+4   4.33E+4  2.94E+4
       Worst     5.74E+4  3.86E+4  4.47E+4  4.26E+4   4.47E+4  2.95E+4
       St. Dev   3.28E−4  4.40E+3  8.02E+2  3.30E+3   8.02E+2  1.69E+2
0.94   Best      6.64E+4  3.97E+4  4.93E+4  4.51E+4   4.97E+4  3.97E+4
       Mean      6.69E+4  3.97E+4  4.93E+4  4.65E+4   4.98E+4  3.97E+4
       Worst     6.74E+4  3.99E+4  4.93E+4  4.77E+4   4.98E+4  3.99E+4
       St. Dev   4.08E+2  9.42E+1  7.12E+2  3.12E+2   4.13E+2  9.42E+1
0.95   Best      8.45E+4  6.52E+4  6.84E+4  6.63E+4   6.84E+4  4.44E+4
       Mean      8.48E+4  6.52E+4  6.84E+4  6.67E+4   6.84E+4  4.46E+4
       Worst     8.51E+4  6.59E+4  6.85E+4  6.67E+4   6.85E+4  4.53E+4
       St. Dev   5.02E+3  3.72E−4  4.07E+3  4.63E+3   4.07E+3  5.17E+2
0.96   Best      –        6.76E+4  –        6.90E+4   –        4.86E+4
       Mean      –        6.78E+4  –        6.90E+4   –        4.86E+4
       Worst     –        6.78E+4  –        6.90E+4   –        4.86E+4
       St. Dev   –        3.72E−2  –        4.63E−5   –        4.06E−13
0.97   Best      –        7.15E+4  –        7.25E+4   –        6.55E+4
       Mean      –        7.18E+4  –        7.25E+4   –        6.55E+4
       Worst     –        7.18E+4  –        7.25E+4   –        6.55E+4
       St. Dev   –        4.03E+2  –        5.33E−11  –        4.22E−11
0.98   Best      –        9.37E+4  –        9.41E+4   –        7.31E+4
       Mean      –        9.37E+4  –        9.41E+4   –        7.31E+4
       Worst     –        9.37E+4  –        9.48E+4   –        7.37E+4
       St. Dev   –        4.37E−9  –        1.17E+2   –        4.52E−4
0.99   Best      –        –        –        –         –        –
       Mean      –        –        –        –         –        –
       Worst     –        –        –        –         –        –
       St. Dev   –        –        –        –         –        –

Fig. 23. Graphical representation of Table 16.

Table 17
Average SLR for all algorithms with respect to various tasks (in case of GE).
|X|    HEFT    DECM    EPM     REREC   ESRG    HWWO
90     6.3E+1  3.4E+1  4.4E+1  3.9E+1  4.0E+1  3.4E+1
230    7.2E+1  4.7E+1  5.8E+1  4.9E+1  5.4E+1  3.8E+1
495    8.9E+1  5.8E+1  6.6E+1  6.1E+1  6.2E+1  4.9E+1
1127   1.2E+2  7.8E+1  9.3E+1  8.5E+1  9.8E+1  7.2E+1
2555   1.3E+2  8.7E+1  9.5E+1  8.9E+1  9.8E+1  7.9E+1

Fig. 24. Graphical representation of Table 17.

Table 17 and Fig. 24 show the average SLR performance of the task scheduling algorithms. The proposed HWWO algorithm exhibited the lowest SLR values, outperforming the others and emerging as the superior approach in terms of SLR. Across all task sizes, HEFT consistently generated the poorest schedules, trailing EPM and ESRG. Initially, EPM underperformed compared to ESRG but surpassed it as the task count increased. In scenarios where every path within the DAG was critical, DECM and REREC outperformed HEFT, EPM, and ESRG, regardless of input size.

Fig. 25 shows the average SLR performance of the algorithms as a function of CCR. HEFT consistently exhibited the poorest SLR results, while DECM, REREC, and HWWO generated superior schedules compared to HEFT, EPM, and ESRG. DECM and REREC produced comparable schedules across different CCR values. However, HWWO outperformed all others, yielding the best SLR outcomes for all considered CCR values.

Fig. 25. Graphical representation of SLR values with respect to CCR (in case of GE).

Stage II

6.3. Benchmark analysis with metaheuristic algorithms

In this stage, the performance of the proposed HWWO algorithm is evaluated across three scenarios and compared with several metaheuristic methods using various metrics. The evaluation considers different task and processor configurations, as well as benchmark tests involving unimodal functions. Additionally, experiments are conducted for task scheduling in a multiprocessing environment, with input parameters described in Table 18. After the simulations, a comprehensive assessment is carried out, calculating metrics such as average execution time, standard deviation, and mean across iterations. The results highlight the algorithm's effectiveness in terms of energy consumption, system reliability, resource utilization, and sensitivity analysis.

6.3.1. Scenario 1

The study aims to thoroughly evaluate the effectiveness of the proposed HWWO-based approach by testing it with different numbers of tasks. Seven well-known metaheuristic algorithms – PSO, ACO, KH, DA, AHA, GWO, and WOA – are employed alongside the HWWO algorithm to assess its performance when varying the number of tasks while keeping the number of processors constant. The detailed findings and analysis are presented subsequently.

Tasks range: 100–1000
Processor count: 100

For an impartial and consistent evaluation, the parameter settings of the seven algorithms remained unchanged from their default values, as presented in Table 18.

Fig. 26. Graphical representation of Table 19.
Table 18
Parameter setting for stage II.
PSO [57]: Inertia weight = 0.4–0.9; Cognitive component = 1.50; Social component = 2
ACO [55]: Evaporation rate = 0.2; Weight of pheromone on decision = 0.5; Weight of heuristic information on decision = 0.5; Q = 2
KH [59]: Foraging motion = 0.2; Induced motion = 0.006; Inertia weight = 0.5
DA [67]: Separation weight = 0.12; Alignment weight = 0.12; Cohesion weight = 0.75; Food factor = 1; Enemy factor = 1
AHA [67]: Inertia weight = 0.5; Local search probability = 0.5
GWO [29]: μ⃗ = [0, 2]; l1, l2 = [0, 1]; ζ⃗ = [−1, 1]
WOA [28]: ν = [−1, 1]; μ⃗ = [0, 2]; l1, l2 = [0, 1]
Proposed HWWO: ŵ_{i,l} (ms) = [10, 100]; ĉ_{i,k} (ms) = [10, 100]; P_{l,s} = [0.1, 0.5]; P_{l,ind} = [0.03, 0.07]; C_{l,ef} = [0.8, 1.2]; m_l = [2.5, 3.0]; f_{l,max} = 1 GHz; λ_{l,max} = [0.0003, 0.0009]

The findings accentuate the algorithms' proficiency in several key areas, including energy efficiency, system reliability, resource utilization, and sensitivity analysis, across a range of input variations.

i. Energy consumption

The goal of this evaluation is to assess the proposed algorithm's performance by analyzing its energy consumption for different combinations of tasks. The assessment utilizes a set of parallel applications that are randomly generated via a DAG generator [100], where the deadline and reliability requirements for completing each application are calculated as DL(G) = LB(G) ∗ 1.4 and Re(goal)(G) = 0.90 respectively.

Table 19 compares the energy consumption results of the proposed algorithm against the seven other metaheuristic algorithms, allowing a thorough side-by-side comparison of energy efficiency across these solution techniques on the randomized parallel application workloads with the specified deadlines. The data in Table 19 indicate that the ACO algorithm consistently exhibits higher energy consumption than the seven other algorithms under consideration. In the initial stages, the AHA and DA algorithms deliver results superior and comparable to KH and PSO, respectively; however, as the problem size scales up, the KH and PSO algorithms outperform AHA and DA, demonstrating more efficient outcomes. Among the remaining algorithms, the GWO technique outperforms the WOA method while attaining comparable energy efficiency. Strikingly, the newly proposed HWWO algorithm surpasses all the other contenders, exhibiting substantially lower energy consumption and establishing itself as the leading performer in optimizing energy consumption across the diverse set of test scenarios explored in this evaluation.

The graphical representation in Fig. 26 depicts energy consumption levels across different sets of tasks. A clear pattern emerges: energy usage rises as more tasks are added. However, the proposed algorithm's energy consumption values are noticeably lower than those of existing methods, indicating superior efficiency. For 100 processors, the HWWO algorithm reduces energy consumption by 18%–24% relative to GWO and WOA respectively. This substantial reduction highlights the proposed approach's energy-saving advantages, especially with increasing task counts.

ii. System reliability

This part evaluates the proposed algorithm's effectiveness by examining reliability across varying task combinations. It analyzes reliability metrics for different task counts, targeting a reliability goal of Re(min)(G) ≤ Re(goal)(G) ≤ Re(max)(G), i.e., 0.88371 ≤ Re(goal)(G) ≤ 0.98577, set at 0.90. The evaluation utilizes randomly generated parallel applications with deadlines calculated as DL(G) = LB(G) ∗ 1.4.

Fig. 27. Reliability outcomes for metaheuristic algorithms with varying task numbers.
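Table 18's HWWO parameters (static power P_{l,s}, frequency-independent power P_{l,ind}, effective capacitance C_{l,ef}, exponent m_l, maximum frequency f_{l,max}) follow the naming of the DVFS power model that is standard in energy-aware scheduling work. The sketch below assumes that common form — frequency-dependent power P_ind + C_ef · f^m with execution time stretched by f_max/f — and is illustrative rather than the paper's exact equation:

```python
def task_energy(w: float, f: float, f_max: float = 1.0,
                p_ind: float = 0.05, c_ef: float = 1.0, m: float = 2.8) -> float:
    """Energy of one task at frequency f under an assumed DVFS model:
    frequency-dependent power (p_ind + c_ef * f**m) multiplied by the
    stretched execution time w * f_max / f. Default values are drawn from
    the parameter ranges listed in Table 18."""
    return (p_ind + c_ef * f ** m) * w * (f_max / f)

# Scaling a 50 ms task down from f_max trades a longer runtime for energy:
assert task_energy(50.0, 0.6) < task_energy(50.0, 1.0)
```

Under such a model there is an energy-optimal frequency below which the P_ind · (f_max/f) term starts to dominate, which is why the evaluated schedulers search over processor–frequency pairs rather than always selecting the lowest frequency.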
Fig. 27 presents a comparative analysis of the reliability results obtained from the proposed algorithm and the seven other metaheuristic algorithms, enabling a thorough assessment of reliability performance across these solution techniques when handling randomized parallel application workloads with specified deadlines. The figure indicates that the ACO algorithm consistently exhibits lower system reliability than the seven other algorithms evaluated. Initially, the AHA and DA algorithms demonstrate reliability results superior and comparable to KH and PSO respectively, but as the problem size increases, the latter two outperform the former. Among the remaining algorithms, the GWO technique outperforms the WOA method, while the newly proposed HWWO algorithm surpasses all other contenders, exhibiting substantially higher reliability levels in most cases and improving system reliability by 8%–9% over GWO and WOA for 100 processors.

iii. Resource utilization

This part evaluates the effectiveness of the proposed algorithm by examining resource utilization across various task combinations. The examination of resource utilization [78,79] mainly focuses on computation time and compares it with other existing models. The assessment encompasses a variety of randomly generated parallel applications, where the application deadline is determined by DL(G) = LB(G) ∗ 1.4. To offer an in-depth comparative analysis, Fig. 28 (in %) showcases the results achieved by the eight metaheuristic algorithms. Optimal resource utilization is a critical factor in achieving profitability for heterogeneous computing systems, since higher resource utilization directly corresponds to increased profits for service providers. The results indicate that the HWWO algorithm demonstrates superior performance, enhancing resource utilization by 42%, 62%, 22%, 42.48%, 21.86%, 11.83%, and 14% in comparison to PSO, ACO, KH, DA, AHA, GWO, and WOA respectively, across a range of computational tasks. The empirical evidence validates that the proposed HWWO approach facilitates more efficient resource utilization than the existing metaheuristic frameworks.

iv. Sensitivity analysis

In this subsection, the performance of the proposed method is evaluated through sensitivity analysis, a technique employed to ascertain the extent to which the output of a model is influenced by variations in the input parameters. It helps to identify the inputs with the most significant influence on the output and to assess the model's robustness to variations in these inputs. The assessment encompasses a variety of randomly generated parallel applications, where the application deadline is determined by DL(G) = LB(G) ∗ 1.4. Here, the sensitivity of the proposed HWWO model is investigated with respect to task scheduling completion time. The overall completion times of HWWO and the other metaheuristic approaches for varying task quantities are presented in Table 20, on which a sensitivity analysis is conducted using a One-at-a-Time (OAT) based method.

From Table 20, it can be concluded that the proposed HWWO technique gave superior outcomes compared to the other techniques, decreasing computation time by 46.07%, 47.59%, 43.81%, 46.11%, 41.84%, 28.85%, and 30.27% in comparison to PSO, ACO, KH, DA, AHA, GWO, and WOA respectively. To further analyze the performance, the average sensitivity of each algorithm is calculated using the OAT technique, as detailed in Table 21. The proposed HWWO model has the lowest average sensitivity ratio (0.19), indicating the least sensitivity to changes in task number among the algorithms. This lower sensitivity suggests HWWO is more robust and reliable for varying workloads, making it preferable in environments with fluctuating task quantities.

6.3.2. Scenario 2

This part focuses on an in-depth evaluation of the proposed HWWO-based approach by experimenting with different processor counts. The HWWO algorithm is tested alongside seven well-established metaheuristic algorithms: PSO, ACO, KH, DA, AHA, GWO, and WOA. These algorithms are employed to examine the performance of the HWWO method under varying numbers of processors, while maintaining a constant number of tasks. The detailed findings and analysis are presented subsequently.

Fig. 28. Comparative analysis graph of resource utilization for metaheuristic techniques with varying tasks.

Table 19
Analysis of energy consumptions for varying tasks.
|X|   Metric    PSO      ACO      KH       DA       AHA      GWO       WOA      HWWO
100   Best      6.08E+3  6.32E+3  5.25E+3  5.82E+3  5.25E+3  3.57E+3   3.67E+3  3.52E+3
      Mean      6.13E+3  6.37E+3  5.25E+3  5.84E+3  5.25E+3  3.57E+3   3.67E+3  3.52E+3
      Worst     6.13E+3  6.37E+3  5.29E+3  5.84E+3  5.29E+3  3.57E+3   3.68E+3  3.52E+3
      St. Dev   5.30E+1  7.23E+2  5.55E−3  4.15E+1  5.55E−3  1.13E−13  4.61E−4  1.13E−13
200   Best      6.51E+3  6.74E+3  5.83E+3  6.31E+3  5.69E+3  3.92E+3   3.88E+3  3.88E+3
      Mean      6.51E+3  6.79E+3  5.85E+3  6.31E+3  5.69E+3  3.93E+3   3.89E+3  3.89E+3
      Worst     6.51E+3  6.79E+3  5.87E+3  6.35E+3  5.69E+3  3.95E+3   3.89E+3  3.89E+3
      St. Dev   2.07E−4  7.03E+1  3.33E+1  7.09E+1  3.08E−7  5.12E+1   1.02E+1  1.02E+1
300   Best      6.97E+3  7.58E+3  6.42E+3  6.93E+3  6.31E+3  4.87E+3   4.64E+3  4.58E+3
      Mean      6.99E+3  7.58E+3  6.49E+3  6.95E+3  6.33E+3  4.87E+3   4.66E+3  4.59E+3
      Worst     6.99E+3  7.58E+3  6.49E+3  6.95E+3  6.38E+3  5.95E+3   4.69E+3  4.61E+3
      St. Dev   7.35E+1  2.43E−6  3.11E+2  7.61E+1  1.01E+1  4.02E−1   3.12E+0  7.17E+1
400   Best      7.48E+3  7.98E+3  6.89E+3  7.23E+3  6.77E+3  6.05E+3   6.05E+3  6.05E+3
      Mean      7.48E+3  7.99E+3  6.91E+3  7.28E+3  6.77E+3  6.05E+3   6.05E+3  6.05E+3
      Worst     7.48E+3  7.99E+3  6.96E+3  7.28E+3  6.77E+3  6.05E+3   6.05E+3  6.05E+3
      St. Dev   7.15E−4  1.85E−1  1.13E+2  4.25E−4  1.33E−4  3.02E−8   3.02E−8  3.02E−8
500   Best      8.36E+3  9.39E+3  7.49E+3  8.16E+3  7.89E+3  6.76E+3   6.66E+3  6.57E+3
      Mean      8.53E+3  9.46E+3  7.49E+3  8.23E+3  7.89E+3  6.81E+3   6.68E+3  6.65E+3
      Worst     8.67E+3  9.51E+3  7.49E+3  8.27E+3  7.92E+3  6.81E+3   6.68E+3  6.65E+3
      St. Dev   1.81E+1  1.33E+1  2.73E−6  1.44E+0  4.53E+1  4.11E+3   5.00E+1  9.01E+3
600   Best      1.31E+4  1.63E+4  9.78E+3  1.53E+4  9.97E+3  8.77E+3   9.04E+3  8.52E+3
      Mean      1.33E+4  1.67E+4  9.78E+3  1.55E+4  9.99E+3  8.79E+3   9.14E+3  8.52E+3
      Worst     1.37E+4  1.67E+4  9.81E+3  1.59E+4  9.99E+3  8.93E+3   9.14E+3  8.52E+3
      St. Dev   6.01E−1  4.35E−2  4.47E+0  2.33E−4  3.43E−2  9.11E−1   7.12E+1  6.16E−12
700   Best      1.83E+4  2.44E+4  1.41E+4  2.34E+4  1.83E+4  9.89E+3   1.03E+4  9.46E+3
      Mean      1.83E+4  2.54E+4  1.49E+4  2.39E+4  1.83E+4  9.89E+3   1.11E+4  9.47E+3
      Worst     1.83E+4  2.57E+4  1.58E+4  2.43E+4  1.83E+4  9.93E+3   1.22E+4  9.49E+3
      St. Dev   3.01E−8  6.63E+4  2.65E+2  2.73E+4  3.01E−8  6.66E−2   3.01E+2  3.22E−6
800   Best      3.15E+4  3.51E+4  2.41E+4  3.35E+4  2.76E+4  1.52E+4   1.94E+4  1.06E+4
      Mean      3.35E+4  3.51E+4  2.53E+4  3.35E+4  2.78E+4  1.64E+4   1.94E+4  1.06E+4
      Worst     3.35E+4  3.51E+4  2.58E+4  3.37E+4  2.78E+4  1.64E+4   1.94E+4  1.06E+4
      St. Dev   2.32E+4  2.32E−7  4.08E+2  2.32E+1  8.22E+1  3.07E+4   8.17E−8  3.07E−8
900   Best      4.15E+4  4.22E+4  3.95E+4  4.19E+4  4.05E+4  3.51E+4   3.91E+4  2.55E+4
      Mean      4.15E+4  4.22E+4  3.95E+4  4.22E+4  4.11E+4  3.55E+4   3.95E+4  2.55E+4
      Worst     4.15E+4  4.27E+4  3.95E+4  4.29E+4  4.11E+4  3.55E+4   3.95E+4  2.55E+4
      St. Dev   1.81E−6  6.21E+0  5.15E−12 1.81E+2  1.81E+1  8.25E+0   5.15E−2  5.15E−12
1000  Best      4.71E+4  5.28E+4  4.13E+4  5.08E+4  4.60E+4  3.81E+4   4.13E+4  3.25E+4
      Mean      4.73E+4  5.32E+4  4.23E+4  5.13E+4  4.60E+4  3.85E+4   4.23E+4  3.41E+4
      Worst     4.73E+4  5.32E+4  4.23E+4  5.13E+4  4.60E+4  3.85E+4   4.23E+4  3.47E+4
      St. Dev   2.01E+1  6.08E+1  1.51E+3  8.08E+4  3.01E−7  4.44E−2   1.51E+3  6.61E+4

Table 20
Comparative analysis of task completion times across various metaheuristic techniques under varying tasks.
|X|    PSO      ACO      KH       DA       AHA      GWO      WOA      HWWO
100    1.58E+2  1.58E+2  1.56E+2  1.56E+2  1.35E+2  9.57E+1  1.03E+2  7.62E+1
200    2.01E+2  2.01E+2  1.88E+2  1.98E+2  1.78E+2  1.68E+2  1.22E+2  8.98E+1
300    2.67E+2  2.58E+2  2.42E+2  2.51E+2  2.36E+2  2.18E+2  1.88E+2  1.07E+2
400    4.49E+2  4.68E+2  4.17E+2  4.38E+2  4.17E+2  2.75E+2  2.54E+2  1.59E+2
500    4.96E+2  4.96E+2  4.85E+2  5.09E+2  4.98E+2  3.12E+2  3.37E+2  2.32E+2
600    5.31E+2  5.53E+2  5.08E+2  5.63E+2  5.37E+2  3.78E+2  3.95E+2  2.98E+2
700    5.83E+2  5.96E+2  5.51E+2  5.93E+2  5.83E+2  4.39E+2  4.39E+2  3.55E+2
800    6.15E+2  6.58E+2  6.01E+2  6.55E+2  6.26E+2  5.07E+2  5.07E+2  3.83E+2
900    6.61E+2  7.12E+2  6.66E+2  7.09E+2  6.66E+2  5.61E+2  5.71E+2  4.02E+2
1000   7.11E+2  7.55E+2  6.76E+2  7.38E+2  6.88E+2  6.04E+2  6.04E+2  4.59E+2

Table 21
The average sensitivity for each algorithm of Table 20.
Algorithm        PSO   ACO   KH    DA    AHA   GWO   WOA   HWWO
Avg sensitivity  0.37  0.42  0.39  0.38  0.34  0.27  0.29  0.19

Processors range: 100–1000
Task count: 1000

The findings accentuate the algorithms' proficiency in several key areas, including energy efficiency, system reliability, resource utilization, and sensitivity analysis, across a range of input variations.

Fig. 29. Energy consumption for metaheuristic algorithms with respect to varying processors.

i. Energy consumption

The goal of this evaluation is to assess the proposed algorithm's performance by analyzing its energy consumption for different combinations of processors.
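The paper does not reproduce its exact OAT formula in this section; one common One-at-a-Time average-sensitivity measure changes a single input per step and relates the relative output change to the relative input change. A minimal sketch on hypothetical data (the function name and formulation are assumptions, not the authors' definition):

```python
def avg_oat_sensitivity(inputs: list[float], outputs: list[float]) -> float:
    """Average One-at-a-Time sensitivity: at each step, the relative change
    in the output divided by the relative change in the varied input,
    averaged over all steps (assumed formulation)."""
    ratios = []
    for i in range(1, len(inputs)):
        rel_in = (inputs[i] - inputs[i - 1]) / inputs[i - 1]
        rel_out = (outputs[i] - outputs[i - 1]) / outputs[i - 1]
        ratios.append(abs(rel_out / rel_in))
    return sum(ratios) / len(ratios)

# Hypothetical completion times as the task count is doubled twice:
print(avg_oat_sensitivity([100, 200, 400], [10.0, 12.0, 18.0]))
# average of the per-step ratios 0.2 and 0.5
```

A lower average ratio means the completion time moves less, proportionally, than the workload does — the sense in which the text calls the low-ratio algorithm more robust to workload fluctuation.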
The assessment utilizes a set of mechanism, enabling the HWWO algorithm to outperform oth- parallel applications that are randomly generated via a DAG ers and find better solutions. When tested with 1000 tasks, generator [100], where the deadline and reliability requirements the HWWO algorithm reduced energy consumption by 13.46- for completing each application are calculated as 𝐷𝐿(𝐺) = 23.81% compared to GWO and WOA, respectively. This sub- 𝐿𝐵(𝐺) ∗ 1.4 and 𝑅𝑒(𝑔𝑜𝑎𝑙) (𝐺) = 0.90 respectively. stantial reduction underscores the energy-saving advantages of The energy consumption results displayed in Fig. 29 compare the proposed approach, especially as the number of processors the proposed algorithm against other metaheuristic algorithms. increases. The figure reveals that the ACO algorithm consistently consumes ii System reliability more energy than the other algorithms evaluated. Among these This part evaluates the proposed algorithm’s effectiveness by algorithms, the GWO technique outperforms the WOA method examining reliability across varying processors combinations. It while achieving similar energy efficiency as the AHA algorithm. analyzes reliability metrics for different task counts, targeting Notably, the proposed algorithm exhibits significantly lower a reliability goal of 𝑅𝑒(𝑚𝑖𝑛) (𝐺) ≤ 𝑅𝑒(𝑔𝑜𝑎𝑙) (𝐺) ≤ 𝑅𝑒(𝑚𝑎𝑥) (𝐺) or energy consumption than existing methods, indicating superior 0.88371 ≤ 𝑅𝑒(𝑔𝑜𝑎𝑙) (𝐺) ≤ 0.98577 at 0.90. The evaluation uti- efficiency. This is due to the proposed algorithm defining a cir- lizes randomly generated parallel applications with deadlines cular neighborhood around solutions based on its encirclement calculated as 𝐷𝐿(𝐺) = 𝐿𝐵(𝐺) ∗ 1.4. 31 Karishma and H. Kumar Computer Standards & Interfaces 97 (2026) 104106 Fig. 30. Reliability outcomes for metaheuristic algorithms with varying processor numbers. Fig. 30 highlights the performance metrics, indicating that the to 47.94% compared to various metaheuristic techniques. 
Fig. 30 highlights the performance metrics, indicating that the ACO algorithm consistently exhibits lower system reliability than the seven other algorithms evaluated. Among the algorithms, the GWO technique outperforms the WOA method, while the newly proposed HWWO algorithm surpasses all other contenders, exhibiting substantially higher reliability levels in most cases and improving system reliability by 5%–8% over GWO and WOA for 1000 tasks.

iii Resource utilization
This part assesses the efficacy of the proposed algorithmic approach through a comprehensive analysis of resource utilization across different processor configurations. The assessment encompasses a variety of randomly generated parallel applications, where the application deadline is determined by the formula DL(G) = LB(G) ∗ 1.4. To offer an in-depth comparative analysis, Fig. 31 reports resource utilization (in %), showcasing the best results achieved by the eight metaheuristic algorithms.

Fig. 31. Comparative analysis graph of resource utilization for metaheuristic techniques with varying processors.

The results in the figure indicate that the HWWO algorithm delivers superior performance, enhancing resource utilization by 26.92%, 29.79%, 19.34%, 16.48%, 16.86%, 9.83%, and 25.15% in comparison to PSO, ACO, KH, DA, AHA, GWO, and WOA, respectively, across a range of processor counts. The empirical evidence derived from the experimental findings validates that the proposed HWWO approach facilitates more efficient resource utilization than the existing metaheuristic frameworks.

iv Sensitivity analysis
In this subsection, the effectiveness of the proposed method is assessed via sensitivity analysis with respect to different processor counts. The assessment again encompasses randomly generated parallel applications whose deadlines are determined by DL(G) = LB(G) ∗ 1.4, and the sensitivity of the HWWO model with respect to task-scheduling completion times is explored. Table 22 displays the overall completion times for HWWO and the existing metaheuristic approaches across varying processor counts, on which the sensitivity analysis is performed.

Table 22
Comparative analysis of task completion times across various metaheuristic techniques under varying processors.

|Y|    PSO      ACO      KH       DA       AHA      GWO      WOA      HWWO
100    1.28E+2  1.28E+2  1.26E+2  9.56E+1  8.62E+1  8.02E+1  1.07E+2  6.82E+1
200    2.12E+2  2.21E+2  1.88E+2  1.88E+2  1.73E+2  1.64E+2  1.85E+2  8.88E+1
300    2.67E+2  2.71E+2  2.71E+2  2.51E+2  2.38E+2  2.38E+2  2.65E+2  1.17E+2
400    4.89E+2  4.98E+2  4.72E+2  4.48E+2  4.37E+2  2.84E+2  4.59E+2  1.54E+2
500    4.98E+2  5.21E+2  4.85E+2  5.19E+2  4.78E+2  3.62E+2  5.19E+2  2.22E+2
600    5.71E+2  5.73E+2  5.64E+2  5.62E+2  5.17E+2  3.73E+2  5.64E+2  2.98E+2
700    6.13E+2  6.26E+2  5.95E+2  5.89E+2  5.73E+2  4.49E+2  5.93E+2  3.75E+2
800    6.65E+2  6.68E+2  6.61E+2  6.46E+2  6.46E+2  5.17E+2  6.55E+2  3.86E+2
900    7.41E+2  7.52E+2  7.36E+2  7.15E+2  6.76E+2  5.61E+2  7.27E+2  4.32E+2
1000   7.53E+2  7.75E+2  7.46E+2  7.43E+2  6.91E+2  6.12E+2  7.43E+2  4.54E+2

Table 22 shows that the HWWO technique yielded better results than the other methods, reducing computation time by 30.68% to 47.94% compared to the various metaheuristic techniques. Additionally, Table 23 provides the average sensitivity of each algorithm, determined through the one-at-a-time (OAT) technique, for further performance analysis.

Table 23
The average sensitivity for each algorithm of Table 22.

Algorithm        PSO   ACO   KH    DA    AHA   GWO   WOA   HWWO
Avg sensitivity  0.41  0.45  0.42  0.39  0.33  0.29  0.37  0.22

The table analysis reveals that the proposed HWWO model exhibits the minimal average sensitivity ratio (0.22), indicating superior resistance to fluctuations in processor availability compared to the other algorithms. This lower sensitivity makes HWWO more robust and reliable across different workloads, making it well suited to environments with varying processor counts.

6.3.3. Scenario 3
To validate the proposed HWWO algorithm's efficacy against established metaheuristic optimization techniques, this section employs a set of unimodal test functions. These benchmark functions, sourced from [28] and tabulated in Table 24, assess the algorithm's exploitation capabilities and overall optimization performance. To ensure a fair comparison, all tests use a population size of 30, with a maximum of 15,000 function evaluations across 500 iterations, and each algorithm is executed 30 times independently on each function. The evaluation metrics, including the mean, standard deviation, and best and worst fitness values over the independent runs, are then computed and presented in Table 25.

An analysis of Table 25, which presents the results for the unimodal functions, clearly demonstrates the superior exploitation capability of the proposed HWWO algorithm: it achieves the best mean fitness values in the majority of cases (the bold entries in the original table), whereas the existing algorithms display comparatively inferior performance. Evaluating the algorithms on the best fitness scores across the 30 runs, HWWO again outperformed the others, securing the highest number of best fitness scores (5/7). In comparison, WOA and GWO each attained the best fitness on only one function (1/7), while none of the remaining algorithms achieved the best fitness in any of the runs. These results suggest that the HWWO algorithm demonstrates greater consistency and reliability in attaining optimal fitness values than the others.
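The per-function statistics reported in Table 25 (best, worst, mean, and standard deviation of the final fitness values over the independent runs) reduce to a few lines of bookkeeping. A minimal sketch, using hypothetical fitness values rather than the paper's data:

```python
# Minimal sketch of the per-function statistics of Table 25: best, worst,
# mean, and standard deviation of the final fitness values collected from
# independent runs of one algorithm on one benchmark function.
import statistics

def summarize_runs(final_fitness):
    return {
        "best": min(final_fitness),      # best = lowest fitness (minimization)
        "worst": max(final_fitness),
        "mean": statistics.mean(final_fitness),
        "st_dev": statistics.stdev(final_fitness),  # sample standard deviation
    }

# Hypothetical final fitness values from five runs, for illustration only;
# the paper uses 30 runs per algorithm per function.
stats = summarize_runs([0.4, 0.1, 0.3, 0.2, 0.5])
```

Whether `stdev` (sample) or `pstdev` (population) is intended is not stated in the paper; the sketch uses the sample form, which is the more common reporting convention.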
Table 24
Description of unimodal benchmark functions.

Function                                                    Dim  Range          f_min
F1 = Σ_{i=1}^{n} x_i^2                                      30   [−100, 100]    0
F2 = Σ_{i=1}^{n} |x_i| + Π_{i=1}^{n} |x_i|                  30   [−10, 10]      0
F3 = Σ_{i=1}^{n} (Σ_{j=1}^{i} x_j)^2                        30   [−100, 100]    0
F4 = max_i {|x_i|, 1 ≤ i ≤ n}                               30   [−100, 100]    0
F5 = Σ_{i=1}^{n−1} [100(x_{i+1} − x_i^2)^2 + (x_i − 1)^2]   30   [−30, 30]      0
F6 = Σ_{i=1}^{n} ([x_i + 0.5])^2                            30   [−100, 100]    0
F7 = Σ_{i=1}^{n} i·x_i^4 + random[0, 1)                     30   [−1.28, 1.28]  0

The statistical analysis using the Wilcoxon signed-rank test is presented in Table 26, which evaluates the performance of the HWWO algorithm on the unimodal benchmark functions. The table reports the rank of the HWWO algorithm in comparison with each second algorithm, focusing on the best fitness values; T+ represents the superiority of the proposed HWWO technique. The p-values, calculated at a 5% significance level, test the null hypothesis that the median difference between the algorithms is zero. The final row of Table 26 consolidates the counts of T+ and T−, along with the test statistic, offering a clear summary of the results. The analysis reveals significant improvements in the HWWO algorithm's performance, with T+ accounting for 93.87% and T− for 6.12% of the evaluated benchmarks (p < 0.05). These results indicate that the proposed algorithm achieves superior performance in solving the unimodal benchmark problems, demonstrating faster convergence and greater accuracy than the existing methods.

Finally, the convergence behavior of the proposed HWWO algorithm is depicted through convergence curves compared with the other algorithms in Fig. 32. The convergence rate, which evaluates an algorithm's efficiency in reaching the optimal solution, is analyzed by comparing HWWO's performance with the existing metaheuristic techniques. In the graph, the x-axis represents the number of iterations, while the y-axis shows the average fitness values computed over 1000 tasks using 100 processors.
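As a concrete illustration, the first two entries of Table 24 translate directly into code. The sketch below shows only F1 (the sphere function) and F2, since F7's random[0, 1) noise term makes it stochastic:

```python
# Direct implementations of the first two unimodal benchmarks of Table 24:
# F1 (sphere) and F2 (sum of absolute values plus their product). Both have
# a global minimum of 0 at the origin.
import math

def f1(x):
    """F1: sum of squares, search range [-100, 100]^n."""
    return sum(v * v for v in x)

def f2(x):
    """F2: sum of |x_i| plus product of |x_i|, search range [-10, 10]^n."""
    s = sum(abs(v) for v in x)
    p = math.prod(abs(v) for v in x)
    return s + p

origin = [0.0] * 30  # the global optimum for both functions
```

Because these functions have a single basin of attraction, an algorithm's final fitness on them isolates exploitation quality from exploration, which is exactly the property Scenario 3 is designed to probe.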
Table 25
Outcomes of using unimodal functions.

        PSO         ACO         KH          DA           AHA         GWO         WOA         HWWO
F1
Best    1.709E−13   1.471E−10   1.660E−19   1.835E−13    3.003E−49   3.010E−79   5.763E−60   3.755E−94
Mean    8.644E−11   1.121E−09   6.660E−17   4.992E−12    9.206E−48   2.013E−67   1.847E−57   2.401E−87
Worst   2.483E−10   6.170E−09   5.873E−16   3.849E−11    2.162E−47   8.057E−67   9.399E−57   8.552E−87
St. Dev 1.889E−01   7.223E−04   1.861E−01   3.046E−03    1.418E−47   3.413E−72   2.149E−03   4.027E−72
F2
Best    5.069E−12   8.035E−06   1.803E−34   1.609E−09    7.112E−29   4.925E−55   2.757E−65   2.283E−80
Mean    9.183E−11   1.290E−03   6.257E−33   1.325E−08    1.116E−28   2.350E−52   9.432E−45   5.662E−60
Worst   2.272E−10   5.056E−03   1.230E−32   3.626E−08    1.931E−28   9.217E−52   3.775E−44   2.220E−59
St. Dev 9.594E−11   2.511E−03   6.834E−33   1.611E−08    5.517E−27   3.559E−52   1.088E−44   1.106E−59
F3
Best    5.077E−07   5.783E−02   2.497E−06   3.273E−26    3.156E−05   3.10E−128   6.837E−03   6.389E−28
Mean    3.377E−01   1.123E+00   8.221E−05   1.984E−24    9.296E−03   2.683E−82   1.808E−02   2.711E−27
Worst   1.350E+00   3.310E+00   1.607E−04   7.912E−24    3.513E−02   1.073E−81   4.471E−02   1.080E−26
St. Dev 6.753E−01   1.481E+00   8.191E−05   3.952E−24    1.723E−02   5.367E−82   1.784E−02   5.398E−27
F4
Best    7.275E−04   1.655E−03   5.300E−17   3.102E−05    1.969E−54   2.551E−21   1.388E−62   2.277E−78
Mean    1.654E−02   5.793E−02   8.969E−16   2.358E−04    2.149E−49   1.030E−19   2.346E−43   2.971E−53
Worst   4.365E−02   1.838E−01   1.704E−15   4.706E−04    8.596E−49   3.632E−19   9.346E−43   1.188E−52
St. Dev 1.975E−02   8.624E−02   8.320E−16   2.214E−04    4.297E−49   1.741E−19   4.667E−43   5.942E−53
F5
Best    7.65461E    7.68388E    9.70569E    3.126E−01    6.15363E    4.291E−02   1.18142E    3.113E−04
Mean    7.99255E    8.42474E    5.91556E+02 2.924E+00    6.68606E    7.121E−01   4.65548E    1.267E−03
Worst   8.42156E    9.31820E    1.20253E+03 5.389E+00    7.23273E    1.596E+00   6.10117E    2.696E−03
St. Dev 3.32511E−01 7.36628E−01 5.00585E+02 2.924E+00    6.07412E−1  7.068E−01   2.32924E    1.013E−03
F6
Best    3.543E−01   7.685E−05   3.620E−06   9.22444E−10  2.039E−03   2.907E−16   6.490E−20   1.549E−12
Mean    6.382E−01   6.237E−03   4.380E−06   1.23821E−09  4.113E−03   7.377E−14   3.630E−16   7.981E−11
Worst   8.362E−01   1.772E−02   4.993E−06   1.62791E−09  7.156E−03   2.903E−13   1.324E−15   1.891E−10
St. Dev 2.061E−01   1.444E−13   5.766E−07   2.97023E−10  2.229E−03   8.229E−03   6.436E−16   9.242E−11
F7
Best    3.457E−03   6.312E−03   8.926E−04   9.48224E−04  2.857E−04   3.366E−05   1.03E+4     6.443E−06
Mean    9.097E−03   2.395E−02   1.791E−03   1.97178E−03  6.290E−04   2.899E−04   1.11E+4     2.188E−04
Worst   2.001E−02   5.483E−02   2.728E−03   3.58221E−03  8.085E−04   8.911E−04   1.22E+4     5.489E−04
St. Dev 7.412E−03   2.124E−02   7.973E−04   1.24604E−03  2.388E−04   4.095E−04   3.01E+2     2.247E−04

Table 26
Results of Wilcoxon signed-rank test of Table 25.

Problem                           HWWO vs PSO  vs ACO  vs KH  vs DA  vs AHA  vs GWO  vs WOA
F1                                1            1       2      2      2       1       3
F2                                2            2       1      4      3       2       1
F3                                3            6       4      1      4       3       5
F4                                4            4       3      5      1       4       2
F5                                7            7       7      7      7       7       6
F6                                6            3       5      3      6       5       4
F7                                5            5       6      6      5       6       7
p-value                           0.0355       0.0013  0.0281 0.0176 0.0262  0.0474  0.0087
T+ = sum of positive-number ranks 28           28      28     28     28      20      24
T− = sum of negative-number ranks 0            0       0      0      0       8       4
T = min(T+, T−)                   0            0       0      0      0       8       4

For clarity, the graph illustrates the average fitness values from 10 independent runs, each evaluated over 500 iterations. As shown in Fig. 32, the HWWO algorithm demonstrates rapid convergence toward optimal solutions, outperforming the other algorithms. This superior performance stems from HWWO's hybrid design, which integrates the strengths of WOA and GWO.
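The T+, T−, and T = min(T+, T−) bookkeeping consolidated in Table 26 can be reproduced with a few lines. A sketch under stated assumptions (ties in the absolute differences are not rank-averaged here, and the p-value computation is omitted; a full test would use, e.g., scipy.stats.wilcoxon):

```python
# Sketch of the Wilcoxon signed-rank bookkeeping summarized in Table 26:
# T+ sums the ranks of pairs favouring HWWO, T- the ranks favouring the
# rival, and the test statistic is T = min(T+, T-). Zero differences are
# discarded; tied absolute differences are not rank-averaged in this sketch.

def wilcoxon_t(differences):
    """Return (T_plus, T_minus, T) for paired differences rival - HWWO."""
    nonzero = [d for d in differences if d != 0]
    # rank by absolute difference, smallest absolute difference = rank 1
    ranked = sorted(nonzero, key=abs)
    t_plus = sum(r for r, d in enumerate(ranked, start=1) if d > 0)
    t_minus = sum(r for r, d in enumerate(ranked, start=1) if d < 0)
    return t_plus, t_minus, min(t_plus, t_minus)

# Hypothetical per-function differences (rival fitness minus HWWO fitness)
# for seven benchmarks; positive values mean HWWO found the lower fitness.
t_plus, t_minus, t = wilcoxon_t([0.9, 0.5, 0.3, 1.2, 0.7, -0.1, 0.4])
```

With seven functions the ranks sum to 28, which is why the T+ columns of Table 26 peak at 28 when HWWO wins every pairwise comparison.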
By combining these techniques, HWWO effectively overcomes the limitations of premature and slow convergence inherent in WOA. The figure also underscores the limitations of the PSO and ACO algorithms, primarily their weak exploitation capabilities: despite iterating extensively through the solution space, these algorithms often fail to reach the optimal solution owing to an imbalance between the exploration and exploitation phases. Similarly, DA and KH exhibit strong exploratory abilities in the initial stages but frequently become trapped in local optima, preventing convergence to the optimal solution. In contrast, WOA initially outperforms GWO in generating promising solutions, but GWO surpasses WOA in later iterations by refining the search process. HWWO, however, demonstrates steady improvement in fitness value with increasing iterations, reflecting its well-balanced integration of exploration and exploitation for enhanced optimization performance.

Fig. 32. Comparison of convergence curves of HWWO and literature algorithms.

7. Conclusions

To address the challenge of task scheduling in a heterogeneous distributed computing environment, this research proposes a hybrid metaheuristic technique called HWWO, which amalgamates the WOA and the GWO. The paper presents a reliability-based energy-efficient scheduling model designed to reduce energy consumption and enhance the reliability of applications running on heterogeneous computing platforms, all while adhering to strict deadline requirements. The applications are modeled as DAGs. The article proposes a novel scheduling algorithm that combines the WOA and the GWO with DVFS capabilities, along with an insert-reversed block operation; this hybrid approach aims to minimize both static and dynamic energy consumption. The refined technique simultaneously tackles the challenges of scheduling tasks on appropriate processors while considering multiple objectives.
The proposed method seeks to optimize overall energy consumption, computational time, and system reliability concurrently, offering a comprehensive solution that addresses these critical factors. Extensive experiments highlight the proposed model's effectiveness in considerably reducing energy consumption and processing time, increasing system reliability, and maintaining low complexities. The key contributions based on the experimental findings are:

i The article introduces a hybrid scheduling mechanism, termed HWWO, that integrates swarm intelligence (SI) techniques, specifically the WOA and the GWO, to tackle real-world applications effectively.
ii The WOA exhibits rapid convergence and strikes a balance between exploration and exploitation when solving optimization problems. However, its encircling search mechanism can occasionally lead it to converge prematurely on local optima. To mitigate this issue, a hybrid approach has been devised by incorporating the GWO, synergizing the capabilities of both optimization techniques.
iii Extensive evaluations are conducted on real-world FFT and GE applications to compare the proposed model's performance against various state-of-the-art methods.
iv The proposed algorithm is rigorously evaluated on real-world single-objective constrained optimization problems from the CEC 2020 competition. Comprehensive comparisons are conducted against the competition's state-of-the-art algorithms, including SASS, EnMODE, sCMAgES, and COLSHADE. Additionally, the algorithm's performance is assessed on a set of unimodal benchmark test functions and compared to established metaheuristic approaches.
v The experiments reveal the proposed algorithm's superiority over existing state-of-the-art and metaheuristic methods. It excels in energy efficiency, reliability maximization, computation-time and SLR minimization, CCR optimization, and resource utilization enhancement across diverse scale conditions and deadline constraints.
vi The effectiveness and scalability of the proposed HWWO method are assessed through sensitivity analysis and implementation on varying task and processor counts. The outcomes reveal that HWWO consistently demonstrates the minimal average sensitivity ratio and rapid convergence towards optimal solutions, outperforming existing metaheuristic algorithms.
vii A Wilcoxon signed-rank test is utilized to statistically evaluate the effectiveness of the results.
viii The run-time and space complexities of the proposed method are calculated, both equating to O(|Y| ∗ |X|) + O(|Y|). Notably, in terms of complexity, the proposed algorithm outperforms other existing algorithms in this domain.

7.1. Limitations and future work

In this article, a hybrid model is developed to solve the reliability-based energy-efficient task scheduling problem with multiple objectives. The model successfully reduces total energy consumption compared to existing methods, although it does not reduce static energy consumption individually. As shown in Tables 12 and 16, the proposed HWWO approach exhibits superior energy consumption performance for reliability thresholds up to 0.98. Beyond this threshold, however, the method fails to meet the reliability constraints due to the absence of fault-tolerance mechanisms, indicating that Re(goal)(G) cannot always be satisfied. To address these limitations, future research will focus on integrating fault-tolerance mechanisms into the hybrid model more efficiently. This integration could involve error detection and correction techniques and robust optimization strategies to ensure continuous and accurate task scheduling even in the presence of faults. The method is effective within computing environments where processors are fully connected.
To address its limitations, enhancing the framework involves refining the scheduling algorithms and assessing them across various workflows such as LIGO, SIPHT, and molecular dynamics code. Furthermore, the proposed framework's versatility allows for potential extensions to diverse computing system environments such as grid computing, cloud computing, and cluster computing.

CRediT authorship contribution statement

Karishma: Writing – original draft, Validation, Software, Resources, Methodology, Investigation, Data curation, Conceptualization. Harendra Kumar: Validation, Supervision, Methodology, Investigation, Formal analysis, Conceptualization.

Funding

The authors declare that no funds, grants, or other support were received during the preparation of this manuscript.

Ethics approval and consent to participate

This article does not contain any studies with human participants or animals performed by any of the authors.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability

Data will be made available on request.

References

[1] M. Agarwal, G.M.S. Srivastava, Opposition-based learning inspired particle swarm optimization (OPSO) scheme for task scheduling problem in cloud computing, J. Ambient. Intell. Humaniz. Comput. 12 (10) (2021) 9855–9875, http://dx.doi.org/10.1007/s12652-020-02730-4.
[2] I. Strumberger, N. Bacanin, M. Tuba, E. Tuba, Resource scheduling in cloud computing based on a hybridized whale optimization algorithm, Appl. Sci. 9 (22) (2019) 4893, http://dx.doi.org/10.3390/app9224893.
[3] H. Kumar, I. Tyagi, Hybrid model for tasks scheduling in distributed real time system, J. Ambient. Intell. Humaniz. Comput. 12 (2021) 2881–2903, http://dx.doi.org/10.1007/s12652-020-02445-6.
[4] Karishma, H. Kumar, A new hybrid particle swarm optimization algorithm for optimal tasks scheduling in distributed computing system, Intell. Syst. Appl. 18 (2023) 200219, http://dx.doi.org/10.1016/j.iswa.2023.200219.
[5] G. Taheri, A. Khonsari, R. Entezari-Maleki, L. Sousa, A hybrid algorithm for task scheduling on heterogeneous multiprocessor embedded systems, Appl. Soft Comput. 91 (2020) 106202, http://dx.doi.org/10.1016/j.asoc.2020.106202.
[6] G. Taheri, A. Khonsari, R. Entezari-Maleki, M. Baharloo, L. Sousa, Temperature-aware dynamic voltage and frequency scaling enabled MPSoC modeling using stochastic activity networks, Microprocess. Microsyst. 60 (2018) 15–23, http://dx.doi.org/10.1016/j.micpro.2018.03.011.
[7] B.M.H. Zade, N. Mansouri, M.M. Javidi, SAEA: A security-aware and energy-aware task scheduling strategy by parallel squirrel search algorithm in cloud environment, Expert Syst. Appl. 176 (2021) 114915, http://dx.doi.org/10.1016/j.eswa.2021.114915.
[8] G. Xie, Y. Chen, X. Xiao, C. Xu, R. Li, K. Li, Energy-efficient fault-tolerant scheduling of reliable parallel applications on heterogeneous distributed embedded systems, IEEE Trans. Sustain. Comput. 3 (3) (2017) 167–181, http://dx.doi.org/10.1109/TSUSC.2017.2711362.
[9] B. Hu, Z. Cao, M. Zhou, Energy-minimized scheduling of real-time parallel workflows on heterogeneous distributed computing systems, IEEE Trans. Serv. Comput. 15 (5) (2021) 2766–2779, http://dx.doi.org/10.1109/TSC.2021.3054754.
[10] S. Chen, Z. Li, B. Yang, G. Rudolph, Quantum-inspired hyper-heuristics for energy-aware scheduling on heterogeneous computing systems, IEEE Trans. Parallel Distrib. Syst. 27 (6) (2016) 1796–1810, http://dx.doi.org/10.1109/TPDS.2015.2462835.
[11] M. Safari, R. Khorsand, PL-DVFS: combining power-aware list-based scheduling algorithm with DVFS technique for real-time tasks in cloud computing, J. Supercomput. 74 (10) (2018) 5578–5600, http://dx.doi.org/10.1007/s11227-018-2498-z.
[12] K. Li, Energy-efficient task scheduling on multiple heterogeneous computers: Algorithms, analysis, and performance evaluation, IEEE Trans. Sustain. Comput. 1 (1) (2016) 7–19, http://dx.doi.org/10.1109/TSUSC.2016.2623775.
[13] H. Xu, R. Li, C. Pan, K. Li, Minimizing energy consumption with reliability goal on heterogeneous embedded systems, J. Parallel Distrib. Comput. 127 (2019) 44–57, http://dx.doi.org/10.1016/j.jpdc.2019.01.006.
[14] L. Zhang, M. Ai, K. Liu, J. Chen, K. Li, Reliability enhancement strategies for workflow scheduling under energy consumption constraints in clouds, IEEE Trans. Sustain. Comput. 9 (2) (2024) 155–169, http://dx.doi.org/10.1109/TSUSC.2023.3314759.
[15] L. Zhang, K. Li, K. Li, Y. Xu, Joint optimization of energy efficiency and system reliability for precedence constrained tasks in heterogeneous systems, Int. J. Electr. Power Energy Syst. 78 (2016) 499–512, http://dx.doi.org/10.1016/j.ijepes.2015.11.102.
[16] G. Xie, H. Peng, Z. Li, J. Song, Y. Xie, R. Li, K. Li, Reliability enhancement toward functional safety goal assurance in energy-aware automotive cyber-physical systems, IEEE Trans. Ind. Informatics 14 (12) (2018) 5447–5462, http://dx.doi.org/10.1109/TII.2018.2854762.
[17] L. Ye, Y. Xia, S. Tao, C. Yan, R. Gao, Y. Zhan, Reliability-aware and energy-efficient workflow scheduling in IaaS clouds, IEEE Trans. Autom. Sci. Eng. 20 (3) (2023) 2156–2169, http://dx.doi.org/10.1109/TASE.2022.3195958.
[18] X. Xiao, G. Xie, C. Xu, C. Fan, R. Li, K. Li, Maximizing reliability of energy constrained parallel applications on heterogeneous distributed systems, J. Comput. Sci. 26 (2018) 344–353, http://dx.doi.org/10.1016/j.jocs.2017.05.002.
[19] G. Xie, Y. Chen, Y. Liu, Y. Wei, R. Li, K. Li, Resource consumption cost minimization of reliable parallel applications on heterogeneous embedded systems, IEEE Trans. Ind. Informatics 13 (4) (2016) 1629–1640, http://dx.doi.org/10.1109/TII.2016.2641473.
[20] H. Djigal, J. Feng, J. Lu, J. Ge, IPPTS: An efficient algorithm for scientific workflow scheduling in heterogeneous computing systems, IEEE Trans. Parallel Distrib. Syst. 32 (5) (2021) 1057–1071, http://dx.doi.org/10.1109/TPDS.2020.3041829.
[21] Z. Deng, Z. Yan, H. Huang, H. Shen, Energy-aware task scheduling on heterogeneous computing systems with time constraint, IEEE Access 8 (2020) 23936–23950, http://dx.doi.org/10.1109/ACCESS.2020.2970166.
[22] Z. Quan, Z.-J. Wang, T. Ye, S. Guo, Task scheduling for energy consumption constrained parallel applications on heterogeneous computing systems, IEEE Trans. Parallel Distrib. Syst. 31 (5) (2020) 1165–1182, http://dx.doi.org/10.1109/TPDS.2019.2959533.
[23] Y. Hu, J. Li, L. He, A reformed task scheduling algorithm for heterogeneous distributed systems with energy consumption constraints, Neural Comput. Appl. 32 (10) (2020) 5681–5693, http://dx.doi.org/10.1007/s00521-019-04415-2.
[24] J. Ababneh, A hybrid approach based on grey wolf and whale optimization algorithms for solving cloud task scheduling problem, Math. Probl. Eng. 2021 (1) (2021) 3517145, http://dx.doi.org/10.1155/2021/3517145.
[25] F.W. Ipeayeda, M.O. Oyediran, S.A. Ajagbe, J.O. Jooda, M.O. Adigun, Optimized gravitational search algorithm for feature fusion in a multimodal biometric system, Results Eng. 20 (2023) 101572, http://dx.doi.org/10.1016/j.rineng.2023.101572.
[26] M.O. Oyediran, S.A. Ajagbe, O.S. Ojo, R. Alshahrani, O.O. Awodoye, M.O. Adigun, White shark optimizer via support vector machine for video-based gender classification system, Multimedia Tools Appl. 84 (2025) 34645–34661, http://dx.doi.org/10.1007/s11042-024-20500-8.
[27] Karishma, H. Kumar, GWO based energy-efficient workflow scheduling for heterogeneous computing systems, Soft Comput. 29 (2025) 3469–3508, http://dx.doi.org/10.1007/s00500-025-10614-y.
[28] S. Mirjalili, A. Lewis, The whale optimization algorithm, Adv. Eng. Softw. 95 (2016) 51–67, http://dx.doi.org/10.1016/j.advengsoft.2016.01.008.
[29] S. Mirjalili, S.M. Mirjalili, A. Lewis, Grey wolf optimizer, Adv. Eng. Softw. 69 (2014) 46–61, http://dx.doi.org/10.1016/j.advengsoft.2013.12.007.
[30] H.M. Mohammed, S.U. Umar, T.A. Rashid, A systematic and meta-analysis survey of whale optimization algorithm, Comput. Intell. Neurosci. 2019 (1) (2019) 8718571, http://dx.doi.org/10.1155/2019/8718571.
[31] Z. Xu, Y. Yu, H. Yachi, J. Ji, Y. Todo, S. Gao, A novel memetic whale optimization algorithm for optimization, in: Advances in Swarm Intelligence: 9th International Conference, ICSI 2018, Shanghai, China, June 17–22, 2018, Proceedings, Part I, Springer, 2018, pp. 384–396, http://dx.doi.org/10.1007/978-3-319-93815-8_37.
[32] S. Li, F. Broekaert, Low-power scheduling with DVFS for common RTOS on multicore platforms, ACM SIGBED Rev. 11 (1) (2014) 32–37, http://dx.doi.org/10.1145/2597457.2597461.
[33] X. Tang, W. Shi, F. Wu, Interconnection network energy-aware workflow scheduling algorithm on heterogeneous systems, IEEE Trans. Ind. Informatics 16 (12) (2020) 7637–7645, http://dx.doi.org/10.1109/TII.2019.2962531.
[34] F. Yao, A. Demers, S. Shenker, A scheduling model for reduced CPU energy, in: Proceedings of IEEE 36th Annual Foundations of Computer Science, IEEE, 1995, pp. 374–382, http://dx.doi.org/10.1109/SFCS.1995.492493.
[35] M. Safari, R. Khorsand, Energy-aware scheduling algorithm for time-constrained workflow tasks in DVFS-enabled cloud environment, Simul. Model. Pr. Theory 87 (2018) 311–326, http://dx.doi.org/10.1016/j.simpat.2018.07.006.
[36] Z. Tang, L. Qi, Z. Cheng, K. Li, S.U. Khan, K. Li, An energy-efficient task scheduling algorithm in DVFS-enabled cloud environment, J. Grid Comput. 14 (2016) 55–74, http://dx.doi.org/10.1007/s10723-015-9334-y.
[37] G. Xie, J. Jiang, Y. Liu, R. Li, K. Li, Minimizing energy consumption of real-time parallel applications using downward and upward approaches on heterogeneous systems, IEEE Trans. Ind. Informatics 13 (3) (2017) 1068–1078, http://dx.doi.org/10.1109/TII.2017.2676183.
[38] G. Xie, H. Peng, Z. Li, J. Song, Y. Xie, R. Li, K. Li, Reliability enhancement toward functional safety goal assurance in energy-aware automotive cyber-physical systems, IEEE Trans. Ind. Informatics 14 (12) (2018) 5447–5462, http://dx.doi.org/10.1109/TII.2018.2854762.
[39] A. Javadpour, A.K. Sangaiah, P. Pinto, F. Ja'fari, W. Zhang, A.M.H. Abadi, H. Ahmadi, An energy-optimized embedded load balancing using DVFS computing in cloud data centers, Comput. Commun. 197 (2023) 255–266, http://dx.doi.org/10.1016/j.comcom.2022.10.019.
[40] S. Ijaz, E.U. Munir, S.G. Ahmad, M.M. Rafique, O.F. Rana, Energy-makespan optimization of workflow scheduling in fog–cloud computing, Computing 103 (2021) 2033–2059, http://dx.doi.org/10.1007/s00607-021-00930-0.
[41] B. Kocot, P. Czarnul, J. Proficz, Energy-aware scheduling for high-performance computing systems: A survey, Energies 16 (2) (2023) 890, http://dx.doi.org/10.3390/en16020890.
[42] Y. Hu, J. Li, L. He, A reformed task scheduling algorithm for heterogeneous distributed systems with energy consumption constraints, Neural Comput. Appl. 32 (10) (2020) 5681–5693, http://dx.doi.org/10.1007/s00521-019-04415-2.
[43] A. Benoit, M. Hakem, Y. Robert, Fault tolerant scheduling of precedence task graphs on heterogeneous platforms, in: 2008 IEEE International Symposium on Parallel and Distributed Processing, 2008, pp. 1–8, http://dx.doi.org/10.1109/IPDPS.2008.4536133.
[44] J. Huang, R. Li, X. Jiao, Y. Jiang, W. Chang, Dynamic DAG scheduling on multiprocessor systems: Reliability, energy, and makespan, IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 39 (11) (2020) 3336–3347, http://dx.doi.org/10.1109/TCAD.2020.3013045.
[45] S. Saroja, T. Revathi, N. Auluck, Multi-criteria decision-making for heterogeneous multiprocessor scheduling, Int. J. Inf. Technol. Decis. Mak. 17 (05) (2018) 1399–1427, http://dx.doi.org/10.1142/S0219622018500311.
[46] L. Zhao, Y. Ren, K. Sakurai, Reliable workflow scheduling with less resource redundancy, Parallel Comput. 39 (10) (2013) 567–585, http://dx.doi.org/10.1016/j.parco.2013.06.003.
[47] G. Xie, X. Xiao, H. Peng, R. Li, K. Li, A survey of low-energy parallel scheduling algorithms, IEEE Trans. Sustain. Comput. 7 (1) (2022) 27–46, http://dx.doi.org/10.1109/TSUSC.2021.3057983.
[48] S. Safari, M. Ansari, H. Khdr, P. Gohari-Nazari, S. Yari-Karin, A. Yeganeh-Khaksar, S. Hessabi, A. Ejlali, J. Henkel, A survey of fault-tolerance techniques for embedded systems from the perspective of power, energy, and thermal issues, IEEE Access 10 (2022) 12229–12251, http://dx.doi.org/10.1109/ACCESS.2022.3144217.
[49] M. Cui, A. Kritikakou, L. Mo, E. Casseau, Near-optimal energy-efficient partial-duplication task mapping of real-time parallel applications, J. Syst. Archit. 134 (2023) 102790, http://dx.doi.org/10.1016/j.sysarc.2022.102790.
[50] I. Strumberger, N. Bacanin, S. Tomic, M. Beko, M. Tuba, Static drone placement by elephant herding optimization algorithm, in: 2017 25th Telecommunication Forum, TELFOR, 2017, pp. 1–4, http://dx.doi.org/10.1109/TELFOR.2017.8249469.
[51] E. Tuba, E. Dolicanin, M. Tuba, Chaotic brain storm optimization algorithm, in: H. Yin, Y. Gao, S. Chen, Y. Wen, G. Cai, T. Gu, J. Du, A.J. Tallón-Ballesteros, M. Zhang (Eds.), Intelligent Data Engineering and Automated Learning – IDEAL 2017, Springer International Publishing, Cham, 2017, pp. 551–559.
[52] M. Subotic, M. Tuba, N. Bacanin, D. Simian, Parallelized cuckoo search algorithm for unconstrained optimization, in: Proceedings of the 5th WSEAS Congress on Applied Computing Conference, and Proceedings of the 1st International Conference on Biologically Inspired Computation, BICA '12, World Scientific and Engineering Academy and Society (WSEAS), Stevens Point, Wisconsin, USA, 2012, pp. 151–156.
[53] A. Mohammadi, F. Sheikholeslam, S. Mirjalili, Nature-inspired metaheuristic search algorithms for optimizing benchmark problems: Inclined planes system optimization to state-of-the-art methods, Arch. Comput. Methods Eng. 30 (1) (2023) 331–389, http://dx.doi.org/10.1007/s11831-022-09800-0.
[54] D. Karaboga, An idea based on honey bee swarm for numerical optimization, Tech. Rep. TR06, Erciyes University, Engineering Faculty, Computer Engineering Department, 2005, pp. 1–10.
[55] M. Dorigo, M. Birattari, T. Stutzle, Ant colony optimization, IEEE Comput. Intell. Mag. 1 (4) (2006) 28–39, http://dx.doi.org/10.1109/MCI.2006.329691.
[56] R. Rajabioun, Cuckoo optimization algorithm, Appl. Soft Comput. 11 (8) (2011) 5508–5518, http://dx.doi.org/10.1016/j.asoc.2011.05.008.
[57] J. Kennedy, R.
[60] A. Askarzadeh, A novel metaheuristic method for solving constrained engineering optimization problems: Crow search algorithm, Comput. Struct. 169 (2016) 1–12, http://dx.doi.org/10.1016/j.compstruc.2016.03.001.
[61] S. Shadravan, H. Naji, V. Bardsiri, The sailfish optimizer: A novel nature-inspired metaheuristic algorithm for solving constrained engineering optimization problems, Eng. Appl. Artif. Intell. 80 (2019) 20–34, http://dx.doi.org/10.1016/j.engappai.2019.01.001.
[62] X. Chen, L. Cheng, C. Liu, Q. Liu, J. Liu, Y. Mao, J. Murphy, A WOA-based optimization approach for task scheduling in cloud computing systems, IEEE Syst. J. 14 (3) (2020) 3117–3128, http://dx.doi.org/10.1109/JSYST.2019.2960088.
[63] S. Mangalampalli, G.R. Karri, U. Kose, Multi objective trust aware task scheduling algorithm in cloud computing using whale optimization, J. King Saud Univ. - Comput. Inf. Sci. 35 (2) (2023) 791–809, http://dx.doi.org/10.1016/j.jksuci.2023.01.016.
[64] Z. Deng, D. Cao, H. Shen, Z. Yan, H. Huang, Reliability-aware task scheduling for energy efficiency on heterogeneous multiprocessor systems, J. Supercomput. 77 (2021) 11643–11681.
[65] M. Abdel-Basset, D. El-Shahat, K. Deb, M. Abouhawwash, Energy-aware whale optimization algorithm for real-time task scheduling in multiprocessor systems, Appl. Soft Comput. 93 (2020) 106349, http://dx.doi.org/10.1016/j.asoc.2020.106349.
[66] S. Goyal, S. Bhushan, Y. Kumar, A.u.H.S. Rana, M.R. Bhutta, M.F. Ijaz, Y. Son, An optimized framework for energy-resource allocation in a cloud environment based on the whale optimization algorithm, Sensors 21 (5) (2021) http://dx.doi.org/10.3390/s21051583.
[67] R. Ghafari, N. Mansouri, Cost-aware and energy-efficient task scheduling based on grey wolf optimizer, J. Mahani Math. Res. 12 (1) (2023) 257–288.
[68] B.V. Natesha, N. Kumar Sharma, S. Domanal, R.M. Reddy Guddeti, GWOTS: Grey wolf optimization based task scheduling at the green cloud data center, in: 2018 14th International Conference on Semantics, Knowledge and Grids, SKG, 2018, pp. 181–187, http://dx.doi.org/10.1109/SKG.2018.00034.
[69] N. Arora, R.K. Banyal, A particle grey wolf hybrid algorithm for workflow scheduling in cloud computing, Wirel. Pers. Commun. 122 (4) (2022) 3313–3345, http://dx.doi.org/10.1007/s11277-021-09065-z.
[70] F.A. Saif, R. Latip, Z.M. Hanapi, K. Shafinah, Multi-objective grey wolf optimizer algorithm for task scheduling in cloud-fog computing, IEEE Access 11 (2023) 20635–20646, http://dx.doi.org/10.1109/ACCESS.2023.3241240.
[71] N. Bacanin, T. Bezdan, E. Tuba, I. Strumberger, M. Tuba, M. Zivkovic, Task scheduling in cloud computing environment by grey wolf optimizer, in: 2019 27th Telecommunications Forum, TELFOR, 2019, pp. 1–4, http://dx.doi.org/10.1109/TELFOR48224.2019.8971223.
[72] R. Masadeh, A. Sharieh, B. Mahafzah, Humpback whale optimization algorithm based on vocal behavior for task scheduling in cloud computing, Int. J. Adv. Sci. Technol. 13 (3) (2019) 121–140.
[73] J.P.P.M. Sanaj MS, V. Alappatt, Profit maximization based task scheduling in hybrid clouds using whale optimization technique, Inf. Secur. J.: A Glob. Perspect. 29 (4) (2020) 155–168, http://dx.doi.org/10.1080/19393555.2020.1716116.
[74] N. Rana, M.S.A. Latiff, S.M. Abdulhamid, S. Misra, A hybrid whale optimization algorithm with differential evolution optimization for multi-objective virtual machine scheduling in cloud computing, Eng. Optim. 54 (12) (2022) 1999–2016, http://dx.doi.org/10.1080/0305215X.2021.1969560.
[75] K. Sreenu, M. Sreelatha, W-scheduler: whale optimization for task scheduling in cloud computing, Clust. Comput. 22 (2019) 1087–1098, http://dx.doi.org/10.1007/s10586-017-1055-5.
[76] A. Chhabra, S.K. Sahana, N.S. Sani, A. Mohammadzadeh, H.A. Omar, Energy-aware bag-of-tasks scheduling in the cloud computing system using hybrid oppositional differential evolution-enabled whale optimization algorithm, Energies 15 (13) (2022) http://dx.doi.org/10.3390/en15134571.
[77] L. Jia, K. Li, X. Shi, Cloud computing task scheduling model based on improved whale optimization algorithm, Wirel. Commun. Mob. Comput. 2021 (1) (2021) 4888154, http://dx.doi.org/10.1155/2021/4888154.
[78] N. Manikandan, N. Gobalakrishnan, K. Pradeep, Bee optimization based random double adaptive whale optimization model for task scheduling in cloud computing environment, Comput. Commun. 187 (2022) 35–44, http://dx.doi.org/10.1016/j.comcom.2022.01.016.
[79] V. Punyakum, K. Sethanan, K. Nitisiri, R. Pitakaso, Hybrid particle swarm and whale optimization algorithm for multi-visit and multi-period dynamic workforce scheduling and routing problems, Mathematics 10 (19) (2022) http://dx.doi.org/10.3390/math10193663.
Eberhart, Particle swarm optimization, in: Proceedings of [80] K. Pradeep, L.J. Ali, N. Gobalakrishnan, C.J. Raman, N. Manikandan, CWOA: ICNN’95 - International Conference on Neural Networks, vol. 4, 1995, pp. Hybrid Approach for Task Scheduling in Cloud Environment, Comput. J. 65 (7) 1942–1948, http://dx.doi.org/10.1109/ICNN.1995.488968. (2021) 1860–1873, http://dx.doi.org/10.1093/comjnl/bxab028. [58] F. MiarNaeimi, G. Azizyan, M. Rashki, Horse herd optimization algorithm: A [81] N. Manikandan, A. Pravin, Hybrid resource allocation and task scheduling nature-inspired algorithm for high-dimensional optimization problems, Knowl.- scheme in cloud computing using optimal clustering techniques, Int. J. Serv. Based Syst. 213 (2021) 106711, http://dx.doi.org/10.1016/j.knosys.2020. Oper. Informatics 10 (2) (2019) 104–121, http://dx.doi.org/10.1504/IJSOI. 106711. 2019.103403. [59] A.H. Gandomi, A.H. Alavi, Krill herd: A new bio-inspired optimization algo- [82] P. Albert, M. Nanjappan, WHOA: Hybrid based task scheduling in cloud rithm, Commun. Nonlinear Sci. Numer. Simul. 17 (12) (2012) 4831–4845, computing environment, Wirel. Pers. Commun. 121 (3) (2021) 2327–2345, http://dx.doi.org/10.1016/j.cnsns.2012.05.010. http://dx.doi.org/10.1007/s11277-021-08825-1. 37 Karishma and H. Kumar Computer Standards & Interfaces 97 (2026) 104106 [83] P. Gupta, S. Bhagat, D.K. Saini, A. Kumar, M. Alahmadi, P.C. Sharma, Hybrid [92] K. Deep, H. Mebrahtu, Combined mutation operators of genetic algorithm for whale optimization algorithm for resource optimization in cloud E-healthcare the travelling salesman problem, Int. J. Comb. Optim. Probl. Informatics 2 (3) applications, Comput. Mater. Contin. 71 (3) (2022) 5659–5676, http://dx.doi. (2011) 1–23. org/10.32604/cmc.2022.023056. [93] M. Abdel-Basset, G. Manogaran, D. El-Shahat, S. Mirjalili, RETRACTED: A [84] S. Mangalampalli, G.R. Karri, G.N. 
Satish, Efficient workflow scheduling algo- hybrid whale optimization algorithm based on local search strategy for the rithm in cloud computing using whale optimization, Procedia Comput. Sci. 218 permutation flow shop scheduling problem, Future Gener. Comput. Syst. 85 (2023) 1936–1945, http://dx.doi.org/10.1016/j.procs.2023.01.170. (2018) 129–145, http://dx.doi.org/10.1016/j.future.2018.03.020. [85] F.S. Gharehchopogh, H. Gholizadeh, A comprehensive survey: Whale optimiza- [94] A. Kumar, S. Das, I. Zelinka, A self-adaptive spherical search algorithm for tion algorithm and its applications, Swarm Evol. Comput. 48 (2019) 1–24, real-world constrained optimization problems, in: Proceedings of the 2020 http://dx.doi.org/10.1016/j.swevo.2019.03.004. Genetic and Evolutionary Computation Conference Companion, GECCO ’20, [86] H. Topcuoglu, S. Hariri, M.-Y. Wu, Performance-effective and low-complexity Association for Computing Machinery, New York, NY, USA, 2020, pp. 13–14, task scheduling for heterogeneous computing, IEEE Trans. Parallel Distrib. Syst. http://dx.doi.org/10.1145/3377929.3398186. 13 (3) (2002) 260–274, http://dx.doi.org/10.1109/71.993206. [95] A. Kumar, S. Das, I. Zelinka, A modified covariance matrix adaptation evolution [87] Z. Quan, Z.-J. Wang, T. Ye, S. Guo, Task scheduling for energy consumption strategy for real-world constrained optimization problems, in: Proceedings constrained parallel applications on heterogeneous computing systems, IEEE of the 2020 Genetic and Evolutionary Computation Conference Companion, Trans. Parallel Distrib. Syst. 31 (5) (2020) 1165–1182, http://dx.doi.org/10. GECCO ’20, Association for Computing Machinery, New York, NY, USA, 2020, 1109/TPDS.2019.2959533. pp. 11–12, http://dx.doi.org/10.1145/3377929.3398185. [88] G. Xie, X. Xiao, R. Li, K. Li, Schedule length minimization of parallel appli- [96] K.M. Sallam, S.M. Elsayed, R.K. Chakrabortty, M.J. 
Ryan, Improved multi- cations with energy consumption constraints using heuristics on heterogeneous operator differential evolution algorithm for solving unconstrained problems, distributed systems, Concurr. Comput.: Pr. Exp. 29 (16) (2017) e4024, http: in: 2020 IEEE Congress on Evolutionary Computation, CEC, 2020, pp. 1–8, //dx.doi.org/10.1002/cpe.4024. http://dx.doi.org/10.1109/CEC48606.2020.9185577. [89] N. Singh, H. Hachimi, A new hybrid whale optimizer algorithm with mean [97] J. Gurrola-Ramos, A. Hernàndez-Aguirre, O. Dalmau-Cedeño, COLSHADE for strategy of grey wolf optimizer for global optimization, Math. Comput. Appl. real-world single-objective constrained optimization problems, in: 2020 IEEE 23 (1) (2018) http://dx.doi.org/10.3390/mca23010014. Congress on Evolutionary Computation, CEC, 2020, pp. 1–8, http://dx.doi.org/ [90] S. Sandokji, F. Eassa, Dynamic variant rank HEFT task scheduling algo- 10.1109/CEC48606.2020.9185583. rithm toward exascle computing, Procedia Comput. Sci. 163 (2019) 482–493, [98] A. Kumar, G. Wu, M.Z. Ali, R. Mallipeddi, P.N. Suganthan, S. Das, A test- http://dx.doi.org/10.1016/j.procs.2019.12.131, 16th Learning and Technology suite of non-convex constrained optimization problems from the real-world Conference 2019Artificial Intelligence and Machine Learning: Embedding the and some baseline results, Swarm Evol. Comput. 56 (2020) 100693, http: Intelligence. //dx.doi.org/10.1016/j.swevo.2020.100693. [91] G. Xie, G. Zeng, R. Li, K. Li, Energy-aware processor merging algorithms for [99] R.F. Woolson, Wilcoxon signed-rank test, in: Wiley Encyclopedia of Clinical deadline constrained parallel applications in heterogeneous cloud computing, Trials, John Wiley & Sons, Ltd, 2008, pp. 1–3, http://dx.doi.org/10.1002/ IEEE Trans. Sustain. Comput. 2 (2) (2017) 62–75, http://dx.doi.org/10.1109/ 9780471462422.eoct979. TSUSC.2017.2705183. [100] SourceForge, Task graph generator, 2015, URL https://sourceforge.net/projects/ taskgraphgen/. 38