Call for Benchmarks
Please note that the organizers are happy to provide advice on getting benchmarks into the right format, and that we have tools that can help fix certain format-related problems.
Main Track Benchmarks
For the main tracks, we invite submissions of collections of MaxSAT instances in the standard WCNF submission format. Note that the focus of MSE 2022 is on non-randomly generated MaxSAT instances.
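For illustration, a small instance in the 2022 WCNF format might look as follows (the clauses themselves are made up for this example). Hard clauses are prefixed with `h`, soft clauses with their positive integer weight, and every clause is terminated by `0`; no `p`-header line is used:

```
c illustrative WCNF instance
c hard clauses: (x1 v -x2) and (x2 v x3)
h 1 -2 0
h 2 3 0
c soft clauses: (-x1) with weight 1, (-x3) with weight 2
1 -1 0
2 -3 0
```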
Incremental Track Benchmarks
For the newly established special track on incremental MaxSAT, we strongly encourage the community to support this new development by submitting benchmark applications. In contrast to main track submissions, a benchmark application in the incremental track constitutes a command-line program with a collection of input files. An instance of the benchmark application constitutes an execution of the program which makes multiple MaxSAT solver invocations through the IPAMIR interface on a given input file, whose name is given as the single command-line argument to the program.
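As a rough sketch of what such a program looks like, the following C fragment reads its single command-line argument and makes repeated solver calls through the IPAMIR interface. The function names follow the `ipamir.h` header; the encoding shown (one hard clause, one soft literal) is purely hypothetical, and the actual queries would be derived from the input file:

```c
/* Sketch of an incremental-track application; the encoding is a placeholder. */
#include <stdio.h>
#include "ipamir.h"

int main(int argc, char **argv) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s <input-file>\n", argv[0]);
        return 1;
    }
    void *solver = ipamir_init();
    /* ... parse argv[1] and encode the first query (hypothetical clauses below) ... */
    ipamir_add_hard(solver, 1);
    ipamir_add_hard(solver, 0);          /* hard clauses are 0-terminated */
    ipamir_add_soft_lit(solver, -1, 2);  /* soft literal -x1 with weight 2 */
    if (ipamir_solve(solver) == 30)      /* 30 indicates an optimal solution */
        printf("o %llu\n", (unsigned long long) ipamir_val_obj(solver));
    /* ... modify the instance incrementally and solve again as needed ... */
    ipamir_release(solver);
    return 0;
}
```

The point of the track is precisely such repeated, incremental solve calls on one solver instance, rather than independent single-shot invocations.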
Your benchmark application submission must consist of a single directory named after the application. Invoking make in this directory with the environment variable IPAMIRSOLVER set must result in a successful build of the application against the MaxSAT solver located in ../../maxsat/$(IPAMIRSOLVER), which implements the IPAMIR interface as a static library. Please see the app/ipamirapp directory in the IPAMIR repository for an example application.
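A minimal Makefile for such an application could be sketched as follows. Only the IPAMIRSOLVER variable and the ../../maxsat/$(IPAMIRSOLVER) location are prescribed above; the source file name, library name, and linker flags below are assumptions, and the ipamirapp example in the IPAMIR repository should be consulted for the exact conventions:

```make
# Sketch only: app.c, libipamir.a and the extra linker flags are assumptions.
SOLVERDIR = ../../maxsat/$(IPAMIRSOLVER)

app: app.c
	$(CC) -I$(SOLVERDIR) -o app app.c -L$(SOLVERDIR) -lipamir -lstdc++ -lm
```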
Submission Procedure
- A benchmark submission should consist of a single zip or gzipped tar package, containing the WCNF instance files (for main track benchmark submissions) or the benchmark application program and input files (for incremental track benchmark submissions), together with a description of the benchmarks.
- Please use appropriate file naming conventions. Ideally, each instance file name should contain a short descriptive part for the problem domain as well as the parameters used for generating the instance as applicable.
- The benchmark description must be formatted in IEEE Proceedings style and submitted as a PDF. The description should include author information with affiliations, a description of the problem domain in question, a description of the parameters used for generating the instances, and the file naming convention. References should be used as appropriate.
The benchmark descriptions will be posted on the MaxSAT Evaluation 2022 website. Furthermore, the organizers are considering publishing the collection of system and benchmark descriptions as a report in the report series of the Department of Computer Science, University of Helsinki (with an ISSN number).
Please submit benchmarks by email to maxsatevaluation@gmail.com using the subject title "MSE22 benchmark submission" by June 7 at the latest.