MaxSAT Evaluation 2023
Affiliated with SAT 2023 · Italy

Call for Benchmarks

Please note that the organizers are happy to provide advice on getting benchmarks into the right format, and that we have tools that can help fix certain format-related problems.

Main Track Benchmarks

For the main tracks, we invite submissions of collections of MaxSAT instances in the revised WCNF submission format. Note that the focus of MSE 2023 is on non-randomly generated MaxSAT instances.
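For illustration, a toy instance in the revised format could look as follows (this sketch assumes the format introduced in MSE 2022: no "p"-header line, comment lines starting with "c", hard clauses prefixed with "h", soft clauses prefixed with their positive integer weight, and every clause terminated by 0):

  c toy instance over three variables
  h 1 2 0
  h -1 3 0
  1 -2 0
  2 3 -1 0
  4 -3 0

Here the first two clauses are hard, while the remaining three clauses are soft with weights 1, 2, and 4, respectively.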

Incremental Track Benchmarks

For the special track on incremental MaxSAT, we strongly encourage the community to submit benchmark applications. In contrast to main track submissions, a benchmark application in the incremental track is a command-line program together with a collection of input files. An instance of the benchmark application is an execution of the program on a given input file, whose name is passed as the single command-line argument; during this execution, the program makes multiple MaxSAT solver invocations through the IPAMIR interface.

Your benchmark application must be contained in a single directory named after the application. Issuing make in this directory with the environment variable IPAMIRSOLVER set must result in a successful build of the application against the MaxSAT solver located in ../../maxsat/$(IPAMIRSOLVER), which implements the IPAMIR interface as a static library. Please see the app/ipamirapp directory in the IPAMIR repository for an example application.
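As a rough, unofficial sketch of what such an application might look like (assuming the standard IPAMIR entry points ipamir_init, ipamir_add_hard, ipamir_add_soft_lit, ipamir_solve, ipamir_val_obj, and ipamir_release; the input-file parsing and encoding are application-specific and only hinted at here):

  /* Hypothetical sketch of an incremental-track application; see app/ipamirapp
     in the IPAMIR repository for the official example. */
  #include <stdio.h>
  #include <stdint.h>
  #include "ipamir.h"

  int main(int argc, char **argv) {
    if (argc != 2) {                      /* the single argument is the input file */
      fprintf(stderr, "usage: %s <input-file>\n", argv[0]);
      return 1;
    }
    FILE *in = fopen(argv[1], "r");
    if (!in) { perror(argv[1]); return 1; }

    void *solver = ipamir_init();

    /* Application-specific: parse the input file and encode it via IPAMIR.
       Here we only illustrate the call pattern with a fixed toy formula. */
    ipamir_add_hard(solver, 1); ipamir_add_hard(solver, 2); ipamir_add_hard(solver, 0);
    ipamir_add_soft_lit(solver, -1, 3);   /* soft literal -1 with weight 3 */
    ipamir_add_soft_lit(solver, -2, 5);   /* soft literal -2 with weight 5 */

    /* Incremental use: solve, refine the instance, and solve again. */
    for (int round = 0; round < 2; ++round) {
      int res = ipamir_solve(solver);     /* assumption: 30 signals an optimal solution */
      if (res == 30)
        printf("round %d: optimum cost %llu\n", round,
               (unsigned long long) ipamir_val_obj(solver));
      ipamir_add_hard(solver, -1); ipamir_add_hard(solver, 0);  /* e.g. strengthen before re-solving */
    }

    ipamir_release(solver);
    fclose(in);
    return 0;
  }

The Makefile in the application directory would then compile such a program and link it against the static library found under ../../maxsat/$(IPAMIRSOLVER), as required above.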

Submission Procedure

  • A benchmark submission should consist of a single zip or gzipped tar package, containing the WCNF instance files (for main track benchmark submissions) or the benchmark application program and input files (for incremental track benchmark submissions), together with a description of the benchmarks.
  • Please use appropriate file naming conventions. Ideally, each instance file name should contain a short descriptive part identifying the problem domain as well as, where applicable, the parameters used for generating the instance.
  • The benchmark description must be formatted in IEEE Proceedings style and submitted as a PDF. The description should include author information with affiliations, a description of the problem domain in question, a description of the parameters used for generating the instances, and the file name convention. References should be used as appropriate.

The benchmark descriptions will be posted on the MaxSAT Evaluation 2023 website. Furthermore, the organizers are considering publishing the collection of system and benchmark descriptions as a report in the report series of the Department of Computer Science, University of Helsinki (with an ISSN number).



Please submit benchmarks by email to maxsatevaluation@gmail.com using the subject line "MSE23 benchmark submission" by May 31 at the latest.