3  Background

Combinatorial problems are problems whose goal is to find the best arrangement or selection from a finite set of items. They mainly involve assignment, sorting, selection, and optimization tasks such as scheduling and routing. Such challenges are common across multiple disciplines, including computer science, operations research, bioinformatics, and artificial intelligence (Biere et al. 2021). Many combinatorial problems are NP-complete or NP-hard, meaning that solving them can be extremely computationally expensive.

Common examples of these problems include Boolean satisfiability (SAT), integer programming (IP), quantified Boolean formulas (QBF), constraint satisfaction problems (CSP), and other NP-complete problems such as Karp's 21 NP-complete problems (Karp 1972). Additionally, NP-hard problems such as the traveling salesman problem (TSP), answer set programming (ASP), scheduling, and mixed-integer programming (MIP) fall into this category. Over several decades, various algorithms and sophisticated mathematical methods have been developed to solve these problems, which mainly deal with discrete optimization.

Every year, competitions are held to compare the proposed algorithms. For example, the SAT community organizes annual SAT competitions to evaluate and benchmark algorithms across various tracks and domains (Tomáš Balyo, Heule, and Järvisalo 2017). Similarly, the annual QBF solver evaluations (QBFEVAL), the MiniZinc Challenge, the ASP Competition, and the International Planning Competition focus on quantified Boolean formulas, constraint programming, ASP, and planning solvers, respectively (Pulina and Seidl 2019; Stuckey et al. 2014; Gebser, Maratea, and Ricca 2019; Taitler et al. 2024).

Many solvers have been proposed and continue to evolve, leading to significant improvements in both efficiency and effectiveness. In addition to exact solvers (Woeginger 2002), researchers have also explored approximation algorithms (Williamson and Shmoys 2011) and heuristics (Pearl 1984) as alternatives for obtaining solutions close to the optimum, particularly in cases where exact solutions are computationally infeasible. These methods often leverage problem-specific knowledge to guide the search process, typically sacrificing optimality for speed and producing suboptimal solutions.

Moreover, the inherent difficulty of these problems means that no polynomial-time solutions are known, and there is no one-size-fits-all approach for solving them. The "No Free Lunch" theorem formalizes the idea that, when considering every possible problem, no algorithm outperforms the others on average; they all have equivalent performance (Wolpert and Macready 1997). It is problem-specific performance that makes some algorithms appear better suited to certain domains (Wolpert and Macready 1997). The performance of any algorithm is limited by the specific characteristics of the problem instance it encounters. The results of (Nudelman et al., n.d.) contribute to a deeper understanding of the relationship between instance characteristics (such as the number of clauses and variables in a SAT formula) and the difficulty of the problem.

In the realm of algorithm development, both sequential and parallel solvers have been extensively studied. Historically, many problem-solving approaches were originally sequential. For instance, in the SAT domain, the Davis-Putnam-Logemann-Loveland (DPLL) algorithm (M. Davis, Logemann, and Loveland 1962) was initially designed to be sequential. Sequential solvers generally operate without awareness of the available resources and do not dynamically adapt to them. The results of (Hölldobler, Manthey, and Saptawijaya 2010) reveal that the performance and efficiency of a sequential solver depend heavily on hardware resources and low-level processing. This underscores the need to utilize parallel resources in problem solving to improve performance.

3.1 Parallel Solvers

Parallel solvers emerged as multicore architectures became increasingly accessible. Initially, the first parallel SAT solvers utilized single-core CPUs, with multiple units communicating over a network and employing the leader-follower model (Hölldobler et al. 2011). This approach is showcased by PMSat, a parallel version of the sequential MiniSAT solver (Gil, Flores, and Silveira 2009), as well as by the PSATO solver (Zhang, Bonacina, and Hsiang 1996) and PaSAT (Sinz, Blochinger, and Küchlin 2001), which introduced search-space partitioning between processing units communicating over MPI in distributed systems (Hyvärinen, Junttila, and Niemelä 2010). Additionally, advances in algorithm design made the development of parallel solvers possible. For example, in the SAT domain, the transition from DPLL solvers to Conflict-Driven Clause Learning (CDCL) solvers (Biere et al. 2009), divide-and-conquer techniques (Le Frioux 2019), and the implementation of restart strategies (Hyvärinen, Junttila, and Niemelä 2008) facilitated the parallelization of existing algorithm designs. These designs let solvers share learned clauses or split the search space between processing units.

Following the emergence of shared-memory architectures, parallel solvers started to exploit multicore CPUs with memory shared among cores (Singer and Vagner 2006). Modern parallel solvers inherited methods developed in earlier distributed parallelization approaches. For example, the parallel SAT solver Plingeling (Biere 2012) inherits its algorithm design from the sequential Lingeling solver and incorporates a clause-sharing technique among cores for SAT solving. However, in earlier versions of Plingeling, despite running on shared-memory systems, each core kept its own copy of the clauses. As noted in (Biere 2013; Aigner et al. 2013), the memory used by each processing unit was physically separated, leading to an n-fold increase in memory usage.

Other solvers, such as those proposed in (Hyvarinen and Wintersteiger 2012; Manthey 2011), used a finer granularity in their parallelization approach, in which multiple cores exchange clauses to solve the problem; however, these methods did not scale to a high number of cores and yielded limited improvement. With regard to modern graphics processing unit (GPU) and field-programmable gate array (FPGA) computing, several solutions have been proposed (Redekopp and Dandalis 2000; Zhong et al. 1998; J. D. Davis et al. 2008; Gulati et al. 2009; Osama, Wijs, and Biere 2023, 2021; Collevati, Dovier, and Formisano 2022; Meyer et al. 2010). However, some have not demonstrated promising results, and although others show potential, a discussion of these solvers is outside the focus of this thesis.

Despite all the developments in parallel computing and parallel algorithm design, the transition from sequential to parallel solvers remains highly challenging, if not impossible, in some cases. Most solvers are originally designed for sequential execution; therefore, adapting them to parallel environments often requires a thorough understanding of the problem domain, algorithm design, and many-core systems.

A number of works have shown that this challenge can be sidestepped: performance can be improved without directly developing new solvers. Instead, existing solvers can be combined into a portfolio from which the appropriate solver(s) are selected (Kotthoff 2014; Xu et al. 2008; Kadıoglu, Malitsky, and Sellmann, n.d.). In cases where multiple solvers are chosen, they can be run in a schedule, in parallel, or in combination with each other to achieve superior performance.

[Figure: Algorithm selection using ML models]

3.2 Algorithm Selection

Algorithm selection is a meta-algorithmic technique first introduced by John Rice (Rice 1976) in 1976. The algorithm selection problem consists of strategically choosing the most suitable algorithm from a set of candidates to solve a specific problem instance, using the features and characteristics of the problem instance. This problem is closely related to general optimization theory, as both involve finding the best solution among a set of options (Rice 1976). Algorithm selection is an important task in optimizing the performance of computationally expensive tasks, particularly those that can be translated into combinatorial problems (Kotthoff 2014) and NP-complete or NP-hard problems (Musliu and Schwengerer 2013; Sitaru and Raschip 2022).
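
Following Rice's framework, the per-instance algorithm selection problem can be stated formally as follows (the notation here is ours, chosen for this chapter, not Rice's original symbols): given an instance space $\mathcal{P}$, a feature mapping $f\colon \mathcal{P} \to \mathcal{F}$, a finite set of algorithms $\mathcal{A}$, and a performance (cost) measure $m\colon \mathcal{P} \times \mathcal{A} \to \mathbb{R}$ such as runtime, the goal is to learn a selector $S\colon \mathcal{F} \to \mathcal{A}$ that minimizes the expected cost over the instance distribution,

$$
S^{*} = \operatorname*{arg\,min}_{S} \; \mathbb{E}_{x \sim \mathcal{P}} \left[ m\bigl(x, S(f(x))\bigr) \right].
$$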

During the last two decades, the algorithm selection problem has attracted significant attention from both researchers and practitioners, as it is relevant in various domains, including high-performance computing (Ciorba et al. 2023), sorting (Somshubra Majumdar 2016), probabilistic inference (Guo and Hsu 2003; Kocamaz 2013), software validation (Richter et al. 2020), software verification (Wang et al. 2021; Leeson and Dwyer 2024), software package dependency management (Nichols et al. 2023), and data mining (Ewald, Uhrmacher, and Saha 2010; Batista dos Santos and Merschmann 2020). Algorithm selection is also applicable in the fields of meta-learning and automated machine learning (AutoML), where the goal is to build a meta-model that recommends the best machine learning algorithm, configurations, and hyperparameters for a given training task (Brazdil et al. 2022; Vanschoren 2019; Hutter, Kotthoff, and Vanschoren 2019). In the algorithm selection problem, the search space is a discrete set of algorithms, whereas in problems such as hyperparameter optimization (HPO) and the combined algorithm selection and hyperparameter optimization (CASH) problem (Thornton et al. 2013), the search spaces are typically continuous or heterogeneous configuration spaces (Brazdil et al. 2022).

Algorithm selection is sometimes referred to as portfolio-based algorithm selection in the literature (Xu, Hoos, and Leyton-Brown 2010; Xu et al. 2011) and is often used in combination with automatic algorithm configuration. Algorithm configuration is a specialized subset of algorithm selection (Hutter, López-Ibáñez, et al. 2014) in which a portfolio is built from algorithms with varying configuration parameters. The configuration parameters of these algorithms are adjusted based on the problem instance, allowing algorithms to adapt to the specifics of a problem and potentially improving their performance. This approach thus uses different configurations of an algorithm to design a portfolio of solvers from which the appropriate configuration is chosen to solve a problem. A detailed review of the literature on algorithm configuration techniques is beyond the scope of this chapter.

A traditional approach to algorithm selection is the "winner-take-all" or per-set strategy (Kerschke et al. 2019), where different algorithms are evaluated based on their performance across a problem distribution and the one with the lowest average runtime is chosen. However, this per-distribution selection often neglects algorithms that do not perform well on average but excel on specific instances (Leyton-Brown et al. 2003). Per-instance algorithm selection, in contrast, has proven effective in numerous problem-solving scenarios (Bischl et al. 2016).
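
To make this distinction concrete, the following minimal sketch (with made-up runtimes) contrasts the per-set choice, based on mean runtime, with per-instance selection, shown here as the oracle that always picks the fastest solver for each instance:

```python
import numpy as np

# Hypothetical runtime matrix (seconds): rows are instances, columns are algorithms.
runtimes = np.array([
    [12.0,  3.0, 40.0],
    [ 5.0, 60.0,  2.0],
    [ 7.0,  9.0,  1.5],
    [30.0,  4.0, 50.0],
])

# Per-set ("winner-take-all"): one algorithm for the whole distribution.
per_set = int(runtimes.mean(axis=0).argmin())
per_set_total = runtimes[:, per_set].sum()

# Per-instance selection, shown as the oracle (virtual best solver) upper bound.
oracle_total = runtimes.min(axis=1).sum()

print(f"per-set choice: algorithm {per_set}, total runtime {per_set_total:.1f}s")
print(f"per-instance oracle total runtime: {oracle_total:.1f}s")
```

The gap between the two totals is exactly the potential that per-instance selectors try to capture.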

Instance-based selection of an effective algorithm is inherently complex, as it requires evaluating numerous algorithms against various performance criteria, which can lead to significant computational demands. In addition, the abundance of algorithms designed to address different types of problems has underscored the importance of developing effective strategies for selecting the best algorithm for a given context. The field of algorithm selection has evolved significantly to address these challenges, and a wide range of algorithm selection techniques is available in the literature.

One key component of per-instance algorithm selection is feature selection. In order to improve the design and selection of algorithms, measuring the difficulty of instances and identifying the instance features that contribute to their complexity is a fundamental task (Smith-Miles and Lopes 2012; Cárdenas-Montes 2016). Different studies have concluded that problem instance features contribute distinctively to the performance of different solvers. (Leyton-Brown, Nudelman, and Shoham 2002) used statistical regression models to understand the empirical hardness of NP-hard problems, which is believed to help tailor algorithms to the hardness of instances. (Nudelman et al., n.d.) also contributed to understanding the impact of the features of SAT instances on the difficulty of the problem. In short, the literature on instance features strongly suggests that solvers need to be either designed to address a specific instance or carefully selected to solve the particular problem.

For various types of problems, numerous studies have focused on collecting and identifying features to aid in training performant algorithm selectors. In the SAT domain, works such as (Nudelman et al., n.d.; Xu et al. 2008; Hutter, Xu, et al. 2014) have made significant contributions by extracting features from the CNF formulas of SAT instances. More recently, (Shavit and Hoos 2024) revised the SATzilla feature extractor, showing notable performance improvements with the updated SAT features. In the field of AI planning, studies such as (Howe et al. 2000; Fawcett et al. 2014; Roberts et al. 2008; Cenamor, De La Rosa, and Fernández 2013) have contributed by extracting features from the PDDL representation of AI planning instances. Although most studies focus on instance features, some have also explored extracting algorithm-specific features and incorporating them into performance model training (Pulatov et al. 2022). Algorithm selection using machine learning and statistical analysis is made possible through the extraction and utilization of these features.
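
As an illustration, the sketch below computes a handful of basic syntactic features from a DIMACS CNF string. It covers only a toy subset of what extractors such as SATzilla's compute; the helper name and the example formula are ours:

```python
def simple_cnf_features(dimacs_text: str) -> dict:
    """Compute a few basic syntactic features from a DIMACS CNF string.

    Illustrative subset only; real feature extractors (e.g., SATzilla's)
    compute dozens of additional structural and probing features.
    """
    n_vars = n_clauses = 0
    clause_lengths = []
    for line in dimacs_text.splitlines():
        line = line.strip()
        if not line or line.startswith("c"):
            continue                      # skip comments and blank lines
        if line.startswith("p cnf"):
            _, _, v, c = line.split()     # problem line: "p cnf <vars> <clauses>"
            n_vars, n_clauses = int(v), int(c)
            continue
        literals = [int(tok) for tok in line.split() if tok != "0"]
        clause_lengths.append(len(literals))
    return {
        "num_vars": n_vars,
        "num_clauses": n_clauses,
        "clause_var_ratio": n_clauses / max(n_vars, 1),
        "avg_clause_len": sum(clause_lengths) / max(len(clause_lengths), 1),
    }

example = "p cnf 3 2\n1 -2 0\n2 3 -1 0\n"
print(simple_cnf_features(example))
```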

Using features is only worthwhile if the time needed to compute them is considerably less than the time needed to run all the algorithms. (Carchrae and Beck 2005) explored low-knowledge techniques that focus only on observed performance improvement, demonstrating that effective algorithm selection does not always require deep domain knowledge, complex feature extraction, or predictive models. In addition, in scenarios where traditional meta-features are hard to obtain, such as image segmentation tasks, verification-based approaches can be used (Lukac and Kameyama 2023).

There is extensive literature on per-instance algorithm selection approaches, developed to resolve various NP-complete and NP-hard problems, including SAT, TSP, and other related challenges (Xu et al. 2008, 2012; Kerschke et al. 2018, 2019; Kotthoff 2014). Some methods are offline, where solvers are selected before solving the problems (e.g., (Xu et al. 2008)), while others are online, where solvers are continuously selected during the solving process (e.g., (Arbelaez, Hamadi, and Sebag 2009)).

A large number of studies in the broader literature have examined machine learning-based approaches to the algorithm selection problem, ranging from simple classifiers to complex ensemble approaches. These studies focus on using machine learning algorithms to learn the performance mapping from problem instances to algorithms by extracting features from instances (Kotthoff 2014). To name a few, (Guo and Hsu 2005) focused on training two classifiers to predict the appropriate exact and approximation algorithms for solving most probable explanation (MPE) problems. Rule-based systems also play a role in the literature, using rules or decision trees to assign algorithms to problem instances based on extracted features (Ali and Smith 2006). (Lieder et al. 2014) considers algorithm selection as a special case of metareasoning and proposes a model of human cognitive strategy selection for the sorting problem.

In the ASLib paper, different regression and classification methods are also compared, with the results showing that random forest regression is superior to the other methods (Bischl et al. 2016). Before the introduction of the ASLib data format and library, algorithm selection studies were scattered and often unaware of each other, so comparing different approaches was challenging due to the absence of a standardized format. ASLib was proposed to standardize model training and to collect benchmarking data in a structured manner, which facilitated comparisons (Bischl et al. 2016). In response to the need for a fair comparison of different algorithm selection methods, the ICON Challenge on Algorithm Selection was held in 2015 and 2017 (M. Lindauer, van Rijn, and Kotthoff 2019). In 2015, the Zilla system (Xu et al. 2008) was the winner among eight different methods. In the 2017 challenge, ASAP.v2 (Gonard, Schoenauer, and Sebag 2019) was the overall winner among eight methods. The comparison in this competition was conducted using the ASLib format and library.
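
A common realization of this regression setting is to train one runtime-regression model per algorithm and select the algorithm with the lowest predicted runtime for a new instance. The sketch below uses scikit-learn with entirely synthetic feature and runtime data; the array shapes, solver count, and hyperparameters are illustrative assumptions, not the ASLib experimental setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic training data: instance features and measured runtimes (seconds)
# for three hypothetical candidate solvers.
X_train = rng.random((200, 5))             # 200 instances, 5 features each
y_train = rng.random((200, 3)) * 100.0     # runtimes of 3 solvers per instance

# One random forest regression model per solver.
models = []
for k in range(y_train.shape[1]):
    m = RandomForestRegressor(n_estimators=100, random_state=0)
    m.fit(X_train, y_train[:, k])
    models.append(m)

def select_solver(features: np.ndarray) -> int:
    """Pick the solver index with the lowest predicted runtime for one instance."""
    preds = [m.predict(features.reshape(1, -1))[0] for m in models]
    return int(np.argmin(preds))

print("chosen solver:", select_solver(rng.random(5)))
```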

Other efforts have focused not on improving the algorithm selector's choices but on making the decision-making process more intuitive or on reducing the cost of performance model training. Some efforts aim to improve the transparency of algorithm selection through explainability methods. For example, (Visentin et al. 2024) employed iterative feature selection and SHAP analysis to explain selection decisions. Additionally, the cost of training algorithm selection models can be high due to the need to run candidate algorithms on a representative set of training instances. (Kuş et al. 2024) reduces training costs by selecting a subset of training instances using active learning, with timeout predictors and progressively increasing timeouts to further minimize costs. (Brighton and Mellish 2002) also focused on instance selection, that is, choosing a representative set of instances for training, which can help identify the most suitable algorithm and improve algorithm selector efficiency.

3.3 Algorithm Portfolios and Scheduling

In response to the reality that no single algorithm can effectively address all types of problems, the concept of utilizing a portfolio emerged (C. Gomes and Selman 2001). Drawing from the concept of risk diversification in finance, as discussed in (Huberman, Lukose, and Hogg 1997), it is generally wiser to distribute investments across multiple options rather than concentrating on a single one.

Similarly, to optimize problem solving, an algorithm portfolio includes a diverse set of algorithms with complementary performance characteristics, ensuring that at least one will be effective in solving any given instance. These algorithms often have different runtime probability distributions. By combining them into an algorithm portfolio, we can leverage their strengths and exploit these differences in runtime distributions to enhance overall efficiency, as proposed by (C. Gomes and Selman 2001; C. P. Gomes and Selman 1997).

Subsequently, this portfolio can be leveraged through various methods to use the strengths of multiple algorithms and ultimately reduce risk (C. Gomes and Selman 2001; Huberman, Lukose, and Hogg 1997; C. P. Gomes and Selman 1997; Malitsky et al. 2014). Different portfolio approaches have been proposed to effectively harness the capabilities of multiple complementary algorithms. One approach, already discussed in this chapter, is algorithm selection, where the objective is to select a single suitable algorithm per instance. Another approach, plain portfolio selection, runs a set of algorithms either in parallel on a multicore machine (Malitsky et al. 2012; M. Lindauer, Bergdoll, and Hutter 2016; H. H. Hoos et al. 2015; Roussel 2012; Wotzlaw et al. 2012) or sequentially on a single processor by splitting the available time between solvers (Kadioglu et al. 2011; T. M. Lindauer 2014; M. Lindauer, Bergdoll, and Hutter 2016; Malitsky et al. 2013; Xu et al. 2008; O’Mahony et al. 2008; Amadini, Gabbrielli, and Mauro 2018; H. Hoos, Lindauer, and Schaub 2014; Cenamor, De La Rosa, and Fernández 2016). In addition, numerous alternative approaches have been developed to enable the algorithms in a portfolio to cooperate with each other, for example through clause sharing, clause exchange, and search-space splitting. Ultimately, these approaches can outperform any individual algorithm.

In the plain sequential portfolio domain, a vast body of literature has developed successful sequential portfolio solvers over time. In particular, 3S (Kadioglu et al. 2011) and CSHC (Malitsky et al. 2013) secured gold medals in the 2011 and 2013 SAT competitions, respectively. SATzilla (Xu et al. 2008) won the SAT Challenge 2012, CPHydra (O’Mahony et al. 2008) won the 2008 International Constraint Solver Competition, and Sunny-CP won the MiniZinc Challenge in 2015 (Amadini, Gabbrielli, and Mauro 2018). In addition, Claspfolio (H. Hoos, Lindauer, and Schaub 2014) earned the gold medal in both the 2009 and 2011 ASP competitions, and IBaCOP2 (Cenamor, De La Rosa, and Fernández 2016) won the Sequential Satisficing Track of the 2014 International Planning Competition.

3S (Kadioglu et al. 2011) combines solver selection and scheduling. It trains a performance model using k-nearest neighbors (KNN) and improves the KNN approach with distance-based weighting and clustering-based adaptive neighborhood sizes. Additionally, the authors enhanced the model by integrating solver scheduling and testing various approaches such as static, dynamic, fixed-time, and semi-static scheduling. Semi-static scheduling performed best: 90% of the time is allocated to the solver suggested by the KNN model, and 10% to a static solver schedule, which chooses the top solvers averaged over all instances and allocates time statically.
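
A toy illustration of the semi-static split might look as follows; the 10%/90% constants come from the description above, while the solver names and the even split of the static part are our own assumptions rather than the exact 3S procedure:

```python
def semi_static_schedule(predicted_best: str, static_solvers: list[str],
                         budget: float) -> list[tuple[str, float]]:
    """Sketch of a 3S-style semi-static split of the time budget.

    10% of the budget is spread over a static schedule of solvers that do well
    on average; the remaining 90% goes to the solver suggested by the
    per-instance (KNN) model.
    """
    static_budget = 0.1 * budget
    per_solver = static_budget / len(static_solvers)
    schedule = [(s, per_solver) for s in static_solvers]
    schedule.append((predicted_best, 0.9 * budget))
    return schedule

# Hypothetical solver names and a 5000-second budget.
print(semi_static_schedule("kissat", ["minisat", "glucose"], budget=5000.0))
```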

CSHC (Malitsky et al. 2013) selects a portfolio of solvers using a classification method known as cost-sensitive hierarchical clustering and then sequentially allocates time to each solver. In short, the cost-sensitive hierarchical clustering method selects algorithms by grouping problem instances based on their features and the cost of misclassification. It starts with all instances in one cluster and splits them to maximize agreement on the best algorithm within each cluster. Clusters with too few instances are discarded, and merging is considered if it improves performance.

In SATzilla (Xu et al. 2008), seven solvers are selected by manually analyzing competition results to identify algorithms that perform best on specific subsets of instances. An empirical hardness model, trained on instance features, is then used to select the most appropriate solver for each instance. Additionally, SATzilla employs presolvers, typically two, that run before the main solver and quickly resolve easy instances. For harder instances that are not resolved in the presolving phase, one of the seven solvers is chosen based on the model's predictions, and the remaining time is allocated to this solver based on its expected performance.

CPHydra (O’Mahony et al. 2008) is another sequential algorithm portfolio approach, designed to solve constraint satisfaction problems using case-based reasoning (CBR). CBR is a lazy machine learning method in which no model is explicitly trained; instead, the selection is based on past cases and experiences. CPHydra describes each problem instance with syntactic features, such as the average and maximum domain size, and solver-specific features, including recorded search statistics, the number of explored nodes, and runtime. Subsequently, the KNN algorithm is employed to query the top k (with k set to 10) cases most similar to the current problem instance in order to choose the most promising solvers. The available CPU time limit is then divided among multiple solvers, in contrast to approaches such as SATzilla, which rely on a single solver. This division is formulated as a small knapsack-like problem with the aim of maximizing the probability of finding a solution: the schedule is computed to maximize the number of similar cases that can be solved within the assigned time slices.
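
The time-splitting step can be sketched as a small brute-force search over discretized allocations, which is one simple way to attack the knapsack-like problem described above; the solver names, runtimes, and step size below are illustrative assumptions, not CPHydra's actual data or encoding:

```python
from itertools import product

def cphydra_like_schedule(case_runtimes, budget, step=300):
    """Brute-force sketch of a CPHydra-style time split.

    case_runtimes maps solver name -> runtimes on the k similar cases
    (None meaning the case was not solved). We search allocations in
    multiples of `step` seconds and keep the one covering the most cases.
    """
    solvers = list(case_runtimes)
    n_cases = len(next(iter(case_runtimes.values())))
    slots = range(0, budget + 1, step)
    best_alloc, best_solved = None, -1
    for alloc in product(slots, repeat=len(solvers)):
        if sum(alloc) > budget:
            continue
        solved = sum(
            any(case_runtimes[s][i] is not None and case_runtimes[s][i] <= t
                for s, t in zip(solvers, alloc))
            for i in range(n_cases)
        )
        if solved > best_solved:
            best_solved, best_alloc = solved, dict(zip(solvers, alloc))
    return best_alloc, best_solved

# Hypothetical runtimes of two solvers on three similar cases, 1800s budget.
cases = {"choco": [120, None, 900], "mistral": [None, 60, 400]}
print(cphydra_like_schedule(cases, budget=1800, step=600))
```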

Claspfolio (Gebser et al. 2011) considers multiple configurations of the clasp ASP solver and uses support vector regression (SVR) to select suitable solvers. Claspfolio is composed of four key parts: an ASP grounder, gringo; a lightweight clasp variant called claspre, which extracts features from the problem instance; a scoring mechanism that evaluates clasp configurations based on the extracted features; and a selection step in which the configuration with the highest score is chosen to solve the instance. There is no explicit scheduling in the original Claspfolio approach, but it is considered a sequential portfolio approach because it selects one solver configuration and runs it sequentially. (H. Hoos, Lindauer, and Schaub 2014) later introduced Claspfolio2, which includes a solver schedule based on a pre-solving method. This approach employs a timeout-minimal pre-solving schedule computed with ASPEED (H. H. Hoos et al. 2015), where each solver is assigned a specific time budget to solve a given instance. Although ASPEED is capable of handling multicore scheduling, Claspfolio2 primarily focuses on sequential scheduling, maintaining the original approach's emphasis on running solvers one after another.

IBaCOP (Cenamor, De La Rosa, and Fernández 2016) is a portfolio approach in the planning domain. It designs a per-instance configurable portfolio rather than using a fixed configuration. The design process involves selecting a set of planners (five in the experiments) to narrow down those likely to perform well for a given task. This selection is done using Pareto efficiency, choosing planners that offer a balance between different performance criteria without being dominated by others. The selected planners are then each allocated a portion of time on a single processor. IBaCOP2 (Cenamor et al. 2014) was later developed from this approach; it includes more planners and trains a performance prediction model instead of using Pareto efficiency to choose the sub-portfolio, making the selection dynamic.

SNNAP (Collautti et al. 2013) is a KNN-based approach for solving SAT problems that combines supervised and unsupervised learning to create a solver portfolio. It uses random forests to predict the runtime of solvers on specific instances and, based on these predictions, selects similar instances from the training set using KNN, choosing the top-performing solvers predicted to perform well on the given instance. The value of k is defined by the user, with the top three solvers being selected. In experiments, SNNAP has been shown to outperform ISAC (Kadioglu et al. 2010), an instance-specific algorithm configuration method that ultimately selects a single algorithm. The available time is divided between the selected solvers, and they are run sequentially.

Sunny (Amadini, Gabbrielli, and Mauro 2014) is a portfolio designed to solve constraint satisfaction problems (CSP). It uses KNN to select a subset of training instances that are closest to the problem at hand. It then computes a schedule of solvers that solve the maximum number of instances within a specific time limit, allocating a portion of time to each solver to create a schedule. It also includes a backup solver to handle exceptional cases. Sunny-CP (Amadini, Gabbrielli, and Mauro 2018) is based on Sunny and extends its capabilities to handle both CSPs and constraint optimization problems (COPs). Sunny-CP includes an improved feature extraction tool compared to Sunny. Both Sunny and Sunny-CP are sequential portfolio designs.

AutoFolio (M. Lindauer et al. 2015) is another effort that integrates various algorithm and portfolio selection approaches. It is built on the Claspfolio2 framework and employs a highly parametric framework for algorithm selection. AutoFolio is a general-purpose system, incorporating techniques from systems such as 3S, ISAC, SATzilla, and SNNAP, as well as multiclass classification, pairwise classification, and various regression methods. Unlike many other systems, AutoFolio is not specific to any particular problem domain, aiming instead to harness the strengths of each algorithm selection approach it includes. However, it does not focus on parallel portfolios.

A recent sequential portfolio design is MedleySolver (Pimpalkhare et al. 2021), which is designed to solve satisfiability modulo theories (SMT) problems. In this method, multiple solvers are selected based on the multi-armed bandit (MAB) framework and run sequentially. MAB models a situation where an agent must choose between multiple options, balancing the trade-off between exploring new solvers and exploiting the best-known solvers for the highest reward. MedleySolver essentially provides an ordering for a sequence of SMT solvers and splits available time between solvers.
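
The core bandit idea can be sketched with a simple epsilon-greedy selector that ranks solvers by their empirical reward; MedleySolver itself uses more sophisticated bandit strategies and time allocation, so the class below, including the assumed solver names, is only an illustrative stand-in:

```python
import random

class EpsilonGreedySelector:
    """Toy epsilon-greedy bandit over a fixed set of solvers."""

    def __init__(self, solvers, epsilon=0.1):
        self.solvers = list(solvers)
        self.epsilon = epsilon
        self.counts = {s: 0 for s in self.solvers}
        self.values = {s: 0.0 for s in self.solvers}   # running mean reward

    def order(self):
        """Return solvers ordered for the next instance (explore or exploit)."""
        ranked = sorted(self.solvers, key=lambda s: self.values[s], reverse=True)
        if random.random() < self.epsilon:
            random.shuffle(ranked)        # exploration: try a random ordering
        return ranked

    def update(self, solver, reward):
        """Update the running mean reward (e.g., 1.0 if solved, 0.0 otherwise)."""
        self.counts[solver] += 1
        self.values[solver] += (reward - self.values[solver]) / self.counts[solver]

# Hypothetical SMT solver names.
selector = EpsilonGreedySelector(["z3", "cvc5", "yices"])
print(selector.order())
selector.update("cvc5", reward=1.0)
print(selector.order())
```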

In the domain of parallel portfolio solvers, the plain parallel portfolio (PPP) approach involves running multiple solvers on the same instance in parallel, leveraging multi-core shared-memory architectures (Aigner et al. 2013). In this approach, sequential solvers are treated as "black boxes": they do not share any information with each other, and as soon as any solver resolves the instance, the entire process stops (Aigner et al. 2013; M. Lindauer, Hoos, and Hutter 2015). Some PPP methods are static, where the sub-portfolio is selected to run on all problems and is not instance-based. Notable examples of static PPP solvers in the SAT domain include ManySAT (Hamadi, Jabbour, and Sais 2009), ppfolio (Roussel 2012), and pfolioUZK (Wotzlaw et al. 2012), with ppfolio excelling in the 2011 SAT Competition. The PPP approach typically selects the top solvers that consistently outperform others on average. This strategy has also been used in methods such as SATzilla, where a presolving phase, as mentioned above, is employed (Xu et al. 2008). However, it often overlooks solvers that, while less dominant overall, may significantly outperform the top solvers on specific instances.
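
Conceptually, a PPP run looks like the following sketch, in which the solver processes are stand-in functions rather than real SAT solvers; a production implementation would launch actual solver binaries and kill the remaining processes as soon as the first answer arrives:

```python
import concurrent.futures as cf
import random
import time

def run_solver(name: str, seed: int) -> str:
    """Stand-in for invoking a sequential SAT solver as a black box."""
    random.seed(seed)
    time.sleep(random.uniform(0.1, 1.0))   # pretend to search
    return f"{name}: SATISFIABLE"

def plain_parallel_portfolio(solvers):
    """Run all solvers on the same instance and take the first answer."""
    pool = cf.ProcessPoolExecutor(max_workers=len(solvers))
    futures = [pool.submit(run_solver, s, i) for i, s in enumerate(solvers)]
    done, _ = cf.wait(futures, return_when=cf.FIRST_COMPLETED)
    result = next(iter(done)).result()
    # A real portfolio would terminate the remaining solver processes here;
    # shutdown() only stops accepting work and lets running tasks finish.
    pool.shutdown(wait=False, cancel_futures=True)
    return result

if __name__ == "__main__":
    # Hypothetical solver names standing in for portfolio members.
    print(plain_parallel_portfolio(["kissat", "cadical", "glucose"]))
```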

To address these limitations, more innovative approaches, such as P3S (Malitsky et al. 2012), have been introduced. P3S is a parallel version of 3S (Kadioglu et al. 2011) and participated in the SAT Challenge 2012. It uses the same method as 3S but operates in parallel. ASPEED (H. H. Hoos et al. 2015), a static ASP technique, defines a fixed schedule for multiple solvers to run in parallel, solving this scheduling problem using answer set programming. Sunny-cp2 (Amadini, Gabbrielli, and Mauro 2015) is another approach, a parallel version of Sunny-cp, designed for the parallel execution of constraint satisfaction and constraint optimization solvers. We will discuss these approaches in more detail in Chapter 4, where we introduce another approach in this field that surpasses the options currently available in the literature.

ArvandHerd (Valenzano et al. 2012) is another example of parallel portfolio design, but in planning problems. This method won tracks in the 2011 Planning Competition. The portfolio includes multiple configurations of two different planners. Typically, on three cores, two cores are dedicated to running multiple configurations of the Arvand solver, while one core is allocated to LAMA-2008 solver configurations. The process starts with one configuration, and if that configuration exhausts the memory, the planner restarts with another configuration.

Although there have been many studies on plain portfolio designs, no plain portfolio solvers have been accepted into the SAT Competition since 2020 (Froleyks et al. 2021). Prior to 2020, and since the SAT Competition 2016, plain portfolio solvers had the opportunity to enter the No Limit track, where submitting the source code was not required. Before 2016, they were accepted as regular solvers, which is why the gold medals and wins by plain portfolio approaches mentioned above all relate to competitions held before 2016. Cooperative portfolio solvers, on the other hand, have been accepted in all tracks, especially the parallel track of the SAT Competition. In this portfolio approach, solvers can communicate, share learned clauses, or split search spaces to improve efficiency and performance.

For example, PRS-parallel, a parallel portfolio solver capable of running in distributed environments, supports clause sharing and won the parallel track of the 2023 SAT Competition (Tomas Balyo et al. 2023). PaKis, the parallel portfolio solver that won the parallel track of the 2021 SAT Competition, relies on different configurations of the Kissat solver but does not support information sharing between solvers (Tomas Balyo et al. 2021). There is a vast body of literature on cooperative portfolios, which is beyond the scope of this chapter.

Aigner, Martin, Armin Biere, Christoph M. Kirsch, Aina Niemetz, and Mathias Preiner. 2013. “Analysis of Portfolio-Style Parallel SAT Solving on Current Multi-Core Architectures.” In POS-13. Fourth Pragmatics of SAT Workshop, a Workshop of the SAT 2013 Conference, July 7, 2013, Helsinki, Finland, edited by Daniel Le Berre, 29:28–40. EPiC Series in Computing. EasyChair. https://doi.org/10.29007/73N4.
Ali, Shawkat, and Kate A Smith. 2006. “On Learning Algorithm Selection for Classification.” Applied Soft Computing 6 (2): 119–38.
Amadini, Roberto, Maurizio Gabbrielli, and Jacopo Mauro. 2014. SUNNY: A Lazy Portfolio Approach for Constraint Solving.” Theory Pract. Log. Program. 14 (4-5): 509–24. https://doi.org/10.1017/S1471068414000179.
———. 2015. A Multicore Tool for Constraint Solving.” In Proceedings of the 24th International Conference on Artificial Intelligence, 232–38.
———. 2018. “SUNNY-CP and the MiniZinc Challenge.” Theory and Practice of Logic Programming 18 (1): 81–96. https://doi.org/10.1017/S1471068417000205.
Arbelaez, Alejandro, Youssef Hamadi, and Michele Sebag. 2009. Online heuristic selection in constraint programming.”
Balyo, Tomas, Nils Froleyks, Marijn Heule, Markus Iser, Matti Järvisalo, and Martin Suda, eds. 2021. Proceedings of SAT Competition 2021: Solver and Benchmark Descriptions. Anthology or special issue. Department of Computer Science Report Series b. Department of Computer Science Report Series B. http://hdl.handle.net/10138/333647.
Balyo, Tomas, Marijn Heule, Markus Iser, Matti Järvisalo, and Martin Suda, eds. 2023. Proceedings of SAT Competition 2023: Solver, Benchmark and Proof Checker Descriptions. Anthology or special issue. Department of Computer Science, Helsinki Institute for Information Technology, Constraint Reasoning; Optimization research group / Matti Järvisalo.
Balyo, Tomáš, Marijn J H Heule, and Matti Järvisalo. 2017. SAT Competition 2017 Solver and Benchmark Descriptions.” Proceedings of SAT COMPETITION 2017, 14–15.
Batista dos Santos, Vânia, and Luiz Henrique de Campos Merschmann. 2020. “Metalearning Applied to Multi-Label Text Classification.” In Proceedings of the XVI Brazilian Symposium on Information Systems. SBSI ’20. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3411564.3411646.
Biere, Armin. 2012. Lingeling and friends entering the SAT challenge 2012.” Department of Computer Science Series of Publications B, January, 33–34.
———. 2013. Lingeling, Plingeling and Treengeling Entering the SAT Competition 2013.” In. https://api.semanticscholar.org/CorpusID:972178.
Biere, Armin, Marijn Heule, Hans van Maaren, and Toby Walsh. 2009. “Conflict-Driven Clause Learning Sat Solvers.” Handbook of Satisfiability, Frontiers in Artificial Intelligence and Applications, 131–53.
———, eds. 2021. Handbook of Satisfiability - Second Edition. Vol. 336. Frontiers in Artificial Intelligence and Applications. IOS Press. https://doi.org/10.3233/FAIA336.
Bischl, Bernd, Pascal Kerschke, Lars Kotthoff, Marius Lindauer, Yuri Malitsky, Alexandre Fréchette, Holger Hoos, et al. 2016. ASlib: A Benchmark Library for Algorithm Selection.” Artificial Intelligence 237: 41–58.
Brazdil, P., J. N. van Rijn, C. Soares, and J. Vanschoren. 2022. Metalearning: Applications to Automated Machine Learning and Data Mining. Cognitive Technologies. Springer International Publishing. https://books.google.com/books?id=zDcOzgEACAAJ.
Brighton, Henry, and Chris Mellish. 2002. “Advances in Instance Selection for Instance-Based Learning Algorithms.” Data Mining and Knowledge Discovery 6: 153–72.
Carchrae, Tom, and J. Christopher Beck. 2005. Applying Machine Learning to Low-Knowledge Control of Optimization Algorithms.” Computational Intelligence 21. https://api.semanticscholar.org/CorpusID:7953876.
Cárdenas-Montes, Miguel. 2016. Evaluating the Difficulty of Instances of the Travelling Salesman Problem in the Nearby of the Optimal Solution Based on Random Walk Exploration.” In Hybrid Artificial Intelligent Systems, edited by Francisco Martínez-Álvarez, Alicia Troncoso, Héctor Quintián, and Emilio Corchado, 299–310. Cham: Springer International Publishing.
Cenamor, Isabel, Tomás De La Rosa, and Fernando Fernández. 2013. “Learning Predictive Models to Configure Planning Portfolios.” In Proceedings of the 4th Workshop on Planning and Learning (ICAPS-PAL 2013), 14–22. Citeseer.
———. 2016. The IBaCoP planning system: instance-based configured portfolios.” J. Artif. Int. Res. 56 (1): 657–91.
Cenamor, Isabel, Tomás de la Rosa, Fernando Fernández, et al. 2014. IBACOP and IBACOP2 planner.” IPC 2014 Planner Abstracts, 35–38.
Ciorba, Florina M., Ali Mohammed, Jonas H. Müller Korndörfer, and Ahmed Eleliemy. 2023. Automated Scheduling Algorithm Selection in OpenMP.” In 2023 22nd International Symposium on Parallel and Distributed Computing (ISPDC), 106–9. https://doi.org/10.1109/ISPDC59212.2023.00025.
Collautti, Marco, Yuri Malitsky, Deepak Mehta, and Barry O’Sullivan. 2013. SNNAP: Solver-based nearest neighbor for algorithm portfolios.” In Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2013, Prague, Czech Republic, September 23-27, 2013, Proceedings, Part III 13, 435–50. Springer.
Collevati, M., Agostino Dovier, and A. Formisano. 2022. GPU Parallelism for SAT Solving Heuristics.” CEUR-WS.
Davis, John D., Zhangxi Tan, Fang Yu, and Lintao Zhang. 2008. “A Practical Reconfigurable Hardware Accelerator for Boolean Satisfiability Solvers.” In 2008 45th ACM/IEEE Design Automation Conference, 780–85. https://doi.org/10.1145/1391469.1391669.
Davis, Martin, George Logemann, and Donald Loveland. 1962. “A Machine Program for Theorem-Proving.” Commun. ACM 5 (7): 394–97. https://doi.org/10.1145/368273.368557.
Ewald, Roland, Adelinde M. Uhrmacher, and Kaustav Saha. 2010. “Data Mining for Simulation Algorithm Selection.” In. ICST. https://doi.org/10.4108/ICST.SIMUTOOLS2009.5659.
Fawcett, Chris, Mauro Vallati, Frank Hutter, Jörg Hoffmann, Holger Hoos, and Kevin Leyton-Brown. 2014. Improved Features for Runtime Prediction of Domain-Independent Planners.” Proceedings of the International Conference on Automated Planning and Scheduling 24 (1): 355–59. https://doi.org/10.1609/icaps.v24i1.13680.
Froleyks, Nils, Marijn Heule, Markus Iser, Matti Järvisalo, and Martin Suda. 2021. “SAT Competition 2020.” Artificial Intelligence 301: 103572. https://doi.org/10.1016/j.artint.2021.103572.
Gebser, Martin, Roland Kaminski, Benjamin Kaufmann, Torsten Schaub, Marius Thomas Schneider, and Stefan Ziller. 2011. A Portfolio Solver for Answer Set Programming: Preliminary Report.” In Logic Programming and Nonmonotonic Reasoning, edited by James P. Delgrande and Wolfgang Faber, 352–57. Berlin, Heidelberg: Springer Berlin Heidelberg.
Gebser, Martin, Marco Maratea, and Francesco Ricca. 2019. “The Seventh Answer Set Programming Competition: Design and Results.” Theory and Practice of Logic Programming 20 (2): 176–204. https://doi.org/10.1017/s1471068419000061.
Gil, Luı́s, Paulo F. Flores, and Luı́s Miguel Silveira. 2009. PMSat: a parallel version of MiniSAT.” J. Satisf. Boolean Model. Comput. 6 (1-3): 71–98. https://doi.org/10.3233/SAT190063.
Gomes, Carla Pedro, and Bart Selman. 1997. “Algorithm Portfolio Design: Theory Vs. Practice.” ArXiv abs/1302.1541. https://api.semanticscholar.org/CorpusID:8512615.
Gomes, Carla, and Bart Selman. 2001. “Algorithm Portfolios.” Artificial Intelligence 126: 43–62.
Gonard, François, Marc Schoenauer, and Michèle Sebag. 2019. Algorithm Selector and Prescheduler in the ICON Challenge.” In Bioinspired Heuristics for Optimization, 203–19. Springer International Publishing.
Gulati, Kanupriya, Suganth Paul, Sunil P. Khatri, Srinivas Patil, and Abhijit Jas. 2009. FPGA-based hardware acceleration for Boolean satisfiability.” ACM Trans. Des. Autom. Electron. Syst. 14 (2). https://doi.org/10.1145/1497561.1497576.
Guo, Haipeng, and William H. Hsu. 2003. “Algorithm Selection for Sorting and Probabilistic Inference: A Machine Learning-Based Approach.” PhD thesis, USA: Kansas State University.
———. 2005. “A Learning-Based Algorithm Selection Meta-Reasoner for the Real-Time MPE Problem.” In AI 2004: Advances in Artificial Intelligence, edited by Geoffrey I. Webb and Xinghuo Yu, 307–18. Berlin, Heidelberg: Springer. https://doi.org/10.1007/978-3-540-30549-1_28.
Hamadi, Youssef, Said Jabbour, and Lakhdar Sais. 2009. ManySAT: a Parallel SAT Solver.” Journal on Satisfiability, Boolean Modeling and Computation. https://doi.org/10.3233/sat190070.
Hölldobler, Steffen, Norbert Manthey, and Ari Saptawijaya. 2010. Improving Resource-Unaware SAT Solvers.” In Logic for Programming, Artificial Intelligence, and Reasoning, edited by Christian G. Fermüller and Andrei Voronkov, 519–34. Berlin, Heidelberg: Springer Berlin Heidelberg.
Hölldobler, Steffen, Norbert Manthey, Van Hau Nguyen, Julian Stecklina, and Peter Steinke. 2011. A short overview on modern parallel SAT-solvers.” ICACSIS 2011 - 2011 International Conference on Advanced Computer Science and Information Systems, Proceedings, no. January: 201–6.
Hoos, Holger H., Roland Kaminski, Marius Thomas Lindauer, and Torsten Schaub. 2015. aspeed: Solver scheduling via answer set programming.” TPLP 15 (1): 117–42.
Hoos, Holger, Marius Lindauer, and Torsten Schaub. 2014. “Claspfolio 2: Advances in Algorithm Selection for Answer Set Programming.” https://arxiv.org/abs/1405.1520.
Howe, Adele E, Eric Dahlman, Christopher Hansen, Michael Scheetz, and Anneliese Von Mayrhauser. 2000. “Exploiting Competitive Planner Performance.” In Recent Advances in AI Planning: 5th European Conference on Planning, ECP’99, Durham, UK, September 8-10, 1999. Proceedings 5, 62–72. Springer.
Huberman, Bernardo A., Rajan M. Lukose, and Tad Hogg. 1997. An economics approach to hard computational problems.” Science 275 (5296): 51–54. https://doi.org/10.1126/science.275.5296.51.
Hutter, Frank, Lars Kotthoff, and Joaquin Vanschoren, eds. 2019. Automated Machine Learning - Methods, Systems, Challenges. Springer.
Hutter, Frank, Manuel López-Ibáñez, Chris Fawcett, Marius Lindauer, Holger H. Hoos, Kevin Leyton-Brown, and Thomas Stützle. 2014. AClib: A Benchmark Library for Algorithm Configuration.” In Learning and Intelligent Optimization, edited by Panos M. Pardalos, Mauricio G. C. Resende, Chrysafis Vogiatzis, and Jose L. Walteros, 36–40. Cham: Springer International Publishing.
Hutter, Frank, Lin Xu, Holger H. Hoos, and Kevin Leyton-Brown. 2014. “Algorithm Runtime Prediction: Methods & Evaluation.” Artificial Intelligence 206: 79–111. https://doi.org/10.1016/j.artint.2013.10.003.
Hyvarinen, Antti E. J., and Christoph M. Wintersteiger. 2012. Approaches for Multi-Core Propagation in Clause Learning Satisfiability Solvers.” MSR-TR-2012-47. https://www.microsoft.com/en-us/research/publication/approaches-for-multi-core-propagation-in-clause-learning-satisfiability-solvers/.
Hyvärinen, Antti E. J., Tommi Junttila, and Ilkka Niemelä. 2008. “Strategies for Solving SAT in Grids by Randomized Search.” In Intelligent Computer Mathematics, edited by Serge Autexier, John Campbell, Julio Rubio, Volker Sorge, Masakazu Suzuki, and Freek Wiedijk, 125–40. Berlin, Heidelberg: Springer Berlin Heidelberg.
———. 2010. Partitioning SAT Instances for Distributed Solving.” In Logic for Programming, Artificial Intelligence, and Reasoning, edited by Christian G. Fermüller and Andrei Voronkov, 372–86. Berlin, Heidelberg: Springer Berlin Heidelberg.
Kadioglu, Serdar, Yuri Malitsky, Ashish Sabharwal, Horst Samulowitz, and Meinolf Sellmann. 2011. Algorithm selection and scheduling.” Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 6876 LNCS: 454–69. https://doi.org/10.1007/978-3-642-23786-7_35.
Kadioglu, Serdar, Yuri Malitsky, Meinolf Sellmann, and Kevin Tierney. 2010. “ISAC–Instance-Specific Algorithm Configuration.” In ECAI 2010, 751–56. IOS Press.
Kadıoglu, Serdar, Yuri Malitsky, and Meinolf Sellmann. n.d. How to Win Gold at a SAT Competition Without Writing a SAT Solver.”
Karp, Richard M. 1972. “Reducibility Among Combinatorial Problems.” In Complexity of Computer Computations: Proceedings of a Symposium on the Complexity of Computer Computations, Held March 20–22, 1972, at the IBM Thomas j. Watson Research Center, Yorktown Heights, New York, and Sponsored by the Office of Naval Research, Mathematics Program, IBM World Trade Corporation, and the IBM Research Mathematical Sciences Department, edited by Raymond E. Miller, James W. Thatcher, and Jean D. Bohlinger, 85–103. Boston, MA: Springer US. https://doi.org/10.1007/978-1-4684-2001-2_9.
Kerschke, Pascal, Holger H. Hoos, Frank Neumann, and Heike Trautmann. 2019. Automated Algorithm Selection: Survey and Perspectives.” Evolutionary Computation 27 (1): 3–45. https://doi.org/10.1162/evco_a_00242.
Kerschke, Pascal, Lars Kotthoff, Jakob Bossek, Holger H. Hoos, and Heike Trautmann. 2018. Leveraging TSP Solver Complementarity through Machine Learning.” Evolutionary Computation 26 (4): 597–620. https://doi.org/10.1162/evco_a_00215.
Kocamaz, Uğur Erkin. 2013. “Increasing the Efficiency of Quicksort Using a Neural Network Based Algorithm Selection Model.” Information Sciences 229: 94–105. https://doi.org/10.1016/j.ins.2012.11.014.
Kotthoff, Lars. 2014. Algorithm selection for combinatorial search problems: A survey.” AI Magazine 35 (3): 48–69.
Kuş, Erdem, Özgür Akgün, Nguyen Dang, and Ian Miguel. 2024. “Frugal Algorithm Selection.” Cornell University. https://doi.org/10.48550/arxiv.2405.11059.
Le Frioux, Ludovic. 2019. Towards more efficient parallel SAT solving.” Theses, Sorbonne Université. https://theses.hal.science/tel-03030122.
Leeson, Will, and Matthew B. Dwyer. 2024. Algorithm Selection for Software Verification Using Graph Neural Networks.” ACM Trans. Softw. Eng. Methodol. 33 (3). https://doi.org/10.1145/3637225.
Leyton-Brown, Kevin, Eugene Nudelman, Galen Andrew, Jim McFadden, and Yoav Shoham. 2003. A portfolio approach to algorithm selection.” In Proceedings of the 18th International Joint Conference on Artificial Intelligence, 1542–43. IJCAI’03. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.
Leyton-Brown, Kevin, Eugene Nudelman, and Yoav Shoham. 2002. Learning the Empirical Hardness of Optimization Problems: The Case of Combinatorial Auctions.” In Principles and Practice of Constraint Programming - CP 2002, edited by Pascal Van Hentenryck, 556–72. Berlin, Heidelberg: Springer Berlin Heidelberg.
Lieder, Falk, Dillon Plunkett, Jessica B Hamrick, Stuart J Russell, Nicholas Hay, and Tom Griffiths. 2014. “Algorithm Selection by Rational Metareasoning as a Model of Human Strategy Selection.” In Advances in Neural Information Processing Systems, edited by Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, and K. Q. Weinberger. Vol. 27. Curran Associates, Inc. https://proceedings.neurips.cc/paper_files/paper/2014/file/7fb8ceb3bd59c7956b1df66729296a4c-Paper.pdf.
Lindauer, Marius, Rolf-David Bergdoll, and Frank Hutter. 2016. “An Empirical Study of Per-Instance Algorithm Scheduling.” In Proceedings of the Tenth International Conference on Learning and Intelligent Optimization, LION’16, in: Lecture Notes in Computer Science, 253–59. Springer; Springer.
Lindauer, Marius, Holger H Hoos, Frank Hutter, and Torsten Schaub. 2015. “Autofolio: An Automatically Configured Algorithm Selector.” Journal of Artificial Intelligence Research 53: 745–78.
Lindauer, Marius, Holger Hoos, and Frank Hutter. 2015. From sequential algorithm selection to parallel portfolio selection.” In International Conference on Learning and Intelligent Optimization, 1–16. Springer.
Lindauer, Marius, Jan N. van Rijn, and Lars Kotthoff. 2019. “The algorithm selection competitions 2015 and 2017.” Artificial Intelligence 272: 86–100. https://doi.org/10.1016/j.artint.2018.10.004.
Lindauer, T Marius. 2014. Algorithm selection, scheduling and configuration of Boolean constraint solvers.” PhD thesis, Universität Potsdam.
Lukac, Martin, and Michitaka Kameyama. 2023. Verification Based Algorithm Selection.” In 2023 International Conference on Information and Digital Technologies (IDT), 25–30. https://doi.org/10.1109/IDT59031.2023.10194439.
Malitsky, Yuri, Barry O’Sullivan, Alessandro Previti, and Joao Marques-Silva. 2014. “A Portfolio Approach to Enumerating Minimal Correction Subsets for Satisfiability Problems.” In, 368–76. Springer, Cham. https://doi.org/10.1007/978-3-319-07046-9_26.
Malitsky, Yuri, Ashish Sabharwal, Horst Samulowitz, and Meinolf Sellmann. 2012. Parallel SAT Solver Selection and Scheduling.” In Proceedings of the 18th International Conference on Principles and Practice of Constraint Programming - Volume 7514, 512–26. Springer-Verlag.
———. 2013. Algorithm portfolios based on cost-sensitive hierarchical clustering.” In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence, 608–14. IJCAI ’13. Beijing, China: AAAI Press.
Manthey, Norbert. 2011. Parallel SAT solving-using more cores.” Pragmatics of SAT (POS’11).
Meyer, Quirin, Fabian Schönfeld, Marc Stamminger, and Rolf Wanka. 2010. 3-SAT on CUDA: Towards a massively parallel SAT solver.” In 2010 International Conference on High Performance Computing & Simulation, 306–13. https://doi.org/10.1109/HPCS.2010.5547116.
Musliu, Nysret, and Martin Schwengerer. 2013. Algorithm Selection for the Graph Coloring Problem.” In Learning and Intelligent Optimization, edited by Giuseppe Nicosia and Panos Pardalos, 389–403. Berlin, Heidelberg: Springer Berlin Heidelberg.
Nichols, Daniel, Harshitha Menon, Todd Gamblin, and Abhinav Bhatele. 2023. A Probabilistic Approach To Selecting Build Configurations in Package Managers,” November. https://doi.org/10.2172/2223030.
Nudelman, Eugene, Kevin Leyton-Brown, Holger H. Hoos, Alex Devkar, and Yoav Shoham. n.d. “Understanding Random SAT: Beyond the Clauses-to-Variables Ratio.” In Principles and Practice of Constraint Programming – CP 2004, edited by Mark Wallace, 438–52. Berlin, Heidelberg: Springer Berlin Heidelberg.
O’Mahony, Eoin, Emmanuel Hebrard, Alan Holland, Conor Nugent, and Barry O’Sullivan. 2008. “Using Case-Based Reasoning in an Algorithm Portfolio for Constraint Solving.” In Irish Conference on Artificial Intelligence and Cognitive Science, 210–16. Proceedings of the 19th Irish Conference on Artificial Intelligence; Cognitive Science.
Osama, Muhammad, Anton Wijs, and Armin Biere. 2021. SAT Solving with GPU Accelerated Inprocessing.” In Tools and Algorithms for the Construction and Analysis of Systems, edited by Jan Friso Groote and Kim Guldstrand Larsen, 133–51. Cham: Springer International Publishing.
———. 2023. Certified SAT solving with GPU accelerated inprocessing.” Form. Methods Syst. Des. 62 (1–3): 79–118. https://doi.org/10.1007/s10703-023-00432-z.
Pearl, Judea. 1984. Heuristics: Intelligent Search Strategies for Computer Problem Solving. USA: Addison-Wesley Longman Publishing Co., Inc.
Pimpalkhare, Nikhil, Federico Mora, Elizabeth Polgreen, and Sanjit A. Seshia. 2021. MedleySolver: Online SMT Algorithm Selection.” In Theory and Applications of Satisfiability Testing – SAT 2021, edited by Chu-Min Li and Felip Manyà, 453–70. Cham: Springer International Publishing.
Pulatov, Damir, Marie Anastacio, Lars Kotthoff, and Holger Hoos. 2022. Opening the Black Box: Automated Software Analysis for Algorithm Selection.” In Proceedings of the First International Conference on Automated Machine Learning, 188:6/1–18. PMLR. https://proceedings.mlr.press/v188/pulatov22a.html.
Pulina, Luca, and Martina Seidl. 2019. “The 2016 and 2017 QBF Solvers Evaluations (QBFEVAL’16 and QBFEVAL’17).” Artificial Intelligence 274: 224–48. https://doi.org/10.1016/j.artint.2019.04.002.
Redekopp, M., and A. Dandalis. 2000. A Parallel Pipelined SAT Solver for FPGA’s.” In Field-Programmable Logic and Applications: The Roadmap to Reconfigurable Computing, edited by Reiner W. Hartenstein and Herbert Grünbacher, 462–68. Berlin, Heidelberg: Springer Berlin Heidelberg.
Rice, John R. 1976. The Algorithm Selection Problem.” Advances in Computers. https://doi.org/10.1016/S0065-2458(08)60520-3.
Richter, Cedric, Eyke Hüllermeier, Marie-Christine Jakobs, and Heike Wehrheim. 2020. “Algorithm Selection for Software Validation Based on Graph Kernels.” Automated Software Engg. 27 (1–2): 153–86. https://doi.org/10.1007/s10515-020-00270-x.
Roberts, Mark, Adele E Howe, Brandon Wilson, and Marie desJardins. 2008. “What Makes Planners Predictable?” In ICAPS, 288–95.
Roussel, Olivier. 2012. “Description of Ppfolio (2011).” Proc. SAT Challenge, 46.
Shavit, Hadar, and Holger H. Hoos. 2024. Revisiting SATZilla Features in 2024.” In 27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024), edited by Supratik Chakraborty and Jie-Hong Roland Jiang, 305:27:1–26. Leibniz International Proceedings in Informatics (LIPIcs). Dagstuhl, Germany: Schloss Dagstuhl – Leibniz-Zentrum für Informatik. https://doi.org/10.4230/LIPIcs.SAT.2024.27.
Singer, Daniel, and Alain Vagner. 2006. Parallel Resolution of the Satisfiability Problem (SAT) with OpenMP and MPI.” In Parallel Processing and Applied Mathematics, edited by Roman Wyrzykowski, Jack Dongarra, Norbert Meyer, and Jerzy Waśniewski, 380–88. Berlin, Heidelberg: Springer Berlin Heidelberg.
Sinz, Carsten, Wolfgang Blochinger, and Wolfgang Küchlin. 2001. “PaSAT — Parallel SAT-Checking with Lemma Exchange: Implementation and Applications.” Electronic Notes in Discrete Mathematics 9: 205–16. https://doi.org/10.1016/S1571-0653(04)00323-3.
Sitaru, Ioana, and Madalina Raschip. 2022. Algorithm Selection for Combinatorial Packing Problems.” In 2022 IEEE Congress on Evolutionary Computation (CEC), 1–8. Padua, Italy: IEEE Press. https://doi.org/10.1109/CEC55065.2022.9870417.
Smith-Miles, Kate, and Leo Lopes. 2012. “Measuring Instance Difficulty for Combinatorial Optimization Problems.” Computers & Operations Research 39 (5): 875–89. https://doi.org/10.1016/j.cor.2011.07.006.
Majumdar, Somshubra, Kunal Kukreja, and Ishaan Jain. 2016. “AdaSort: Adaptive Sorting using Machine Learning.” International Journal of Computer Applications 145 (12): 12–17. https://doi.org/10.5120/ijca2016910726.
Stuckey, Peter J., Thibaut Feydy, Andreas Schutt, Guido Tack, and Julien Fischer. 2014. The MiniZinc Challenge 2008–2013.” AI Magazine 35 (2): 55–60. https://doi.org/10.1609/aimag.v35i2.2539.
Taitler, Ayal, Ron Alford, Joan Espasa, Gregor Behnke, Daniel Fišer, Michael Gimelfarb, Florian Pommerening, et al. 2024. “The 2023 International Planning Competition.” AI Magazine, 1–17.
Thornton, Chris, Frank Hutter, Holger H. Hoos, and Kevin Leyton-Brown. 2013. Auto-WEKA: combined selection and hyperparameter optimization of classification algorithms.” In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 847–55. KDD ’13. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/2487575.2487629.
Valenzano, Richard, Hootan Nakhost, Martin Müller, Jonathan Schaeffer, and Nathan Sturtevant. 2012. “ArvandHerd: Parallel Planning with a Portfolio.” In Proceedings of the 20th European Conference on Artificial Intelligence, 786–91. ECAI’12. NLD: IOS Press.
Vanschoren, Joaquin. 2019. “Meta-Learning.” In Automated Machine Learning: Methods, Systems, Challenges, edited by Frank Hutter, Lars Kotthoff, and Joaquin Vanschoren, 35–61. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-05318-5_2.
Visentin, Andrea, Aodh Ó Gallchóir, Jens Kärcher, and Herbert Meyr. 2024. Explainable Algorithm Selection for the Capacitated Lot Sizing Problem.” In Integration of Constraint Programming, Artificial Intelligence, and Operations Research, edited by Bistra Dilkina, 243–52. Cham: Springer Nature Switzerland.
Wang, Qiang, Jiawei Jiang, Yongxin Zhao, Weipeng Cao, Chunjiang Wang, and Shengdong Li. 2021. Algorithm selection for software verification based on adversarial LSTM.” In 2021 7th IEEE Intl Conference on Big Data Security on Cloud (BigDataSecurity), IEEE Intl Conference on High Performance and Smart Computing, (HPSC) and IEEE Intl Conference on Intelligent Data and Security (IDS), 87–92. https://doi.org/10.1109/BigDataSecurityHPSCIDS52275.2021.00026.
Williamson, David P., and David B. Shmoys. 2011. The Design of Approximation Algorithms. Cambridge University Press.
Woeginger, Gerhard J. 2002. “Exact Algorithms for NP-hard Problems: A Survey.” In, 185–207. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-36478-1_17.
Wolpert, D. H., and W. G. Macready. 1997. “No Free Lunch Theorems for Optimization.” IEEE Transactions on Evolutionary Computation 1 (1): 67–82. https://doi.org/10.1109/4235.585893.
Wotzlaw, Andreas, Alexander van der Grinten, Ewald Speckenmeyer, and Stefan Porschen. 2012. “pfolioUZK: Solver description.” In Proceedings of SAT Challenge 2012: Solver and Benchmark Descriptions (Balint et al. 2012), 45.
Xu, Lin, Holger H. Hoos, and Kevin Leyton-Brown. 2010. Hydra: Automatically Configuring Algorithms for Portfolio-Based Selection.” In Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence, 210–16. AAAI’10 1. AAAI Press.
Xu, Lin, Frank Hutter, Holger H. Hoos, and Kevin Leyton-Brown. 2008. SATzilla: Portfolio-Based Algorithm Selection for SAT.” J. Artif. Int. Res. 32 (1): 565–606.
Xu, Lin, Frank Hutter, Holger H Hoos, and Kevin Leyton-Brown. 2011. Hydra-MIP: Automated algorithm configuration and selection for mixed integer programming.” In Proceedings of the 18th RCRA Workshop, 16–30.
Xu, Lin, Frank Hutter, Holger Hoos, and Kevin Leyton-Brown. 2012. Evaluating Component Solver Contributions to Portfolio-Based Algorithm Selectors.” In Theory and Applications of Satisfiability Testing – SAT 2012, edited by Alessandro Cimatti and Roberto Sebastiani, 228–41. Berlin, Heidelberg: Springer Berlin Heidelberg.
Zhang, Hantao, Maria Paola Bonacina, and Jieh Hsiang. 1996. “PSATO: a Distributed Propositional Prover and its Application to Quasigroup Problems.” Journal of Symbolic Computation 21 (4): 543–60. https://doi.org/10.1006/jsco.1996.0030.
Zhong, Peixin, M. Martonosi, P. Ashar, and S. Malik. 1998. Accelerating Boolean satisfiability with configurable hardware.” In Proceedings. IEEE Symposium on FPGAs for Custom Computing Machines (Cat. No.98TB100251), 186–95. https://doi.org/10.1109/FPGA.1998.707896.