API

Runner

Post Processing

Metrics

class plannerbenchmark.postProcessing.metrics.Metric(name: str, measNames: list, params: dict)

Bases: abc.ABC

Abstract metric to assess the performance of the motion planner.

_name

Name of the metric.

Type

str

_params

Additional information and data are passed through this dictionary.

Type

dict

_measNames

List of keys that are needed from the results.

Type

list
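
Subclasses implement the actual computation. A minimal sketch of a custom metric follows, assuming the abstract interface is a single computeMetric method (a hypothetical name) that receives the recorded data and returns the evaluation dictionary:

    from plannerbenchmark.postProcessing.metrics import Metric

    class MaxVelocityMetric(Metric):
        """Hypothetical metric returning the maximum absolute velocity."""

        def computeMetric(self, data: dict) -> dict:
            # self._measNames lists the result keys this metric consumes.
            values = [abs(v) for name in self._measNames for v in data[name]]
            # Every metric evaluation is a dictionary with a required
            # key 'short' that summarizes the result.
            return {"short": max(values)}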

class plannerbenchmark.postProcessing.metrics.SolverTimesMetric(name: str, measNames: list, params: dict)

Bases: plannerbenchmark.postProcessing.metrics.Metric

Metric to compute the average solver time of the motion planner.

Requires the interval at which the planner was invoked. Computes the solver time in milliseconds.
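
A hedged sketch of the computation, assuming the raw solver times are logged in seconds under a key solverTime and that only every interval-th sample stems from an actual solver call:

    import numpy as np

    def averageSolverTimeMs(res: dict, interval: int) -> float:
        # The planner is only invoked every `interval` time steps
        # (an assumption about how the data is logged).
        solver_times = np.asarray(res["solverTime"])[::interval]
        return float(np.mean(solver_times)) * 1000.0  # seconds to milliseconds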

class plannerbenchmark.postProcessing.metrics.TimeToReachGoalMetric(name: str, measNames: list, params: dict)

Bases: plannerbenchmark.postProcessing.metrics.Metric

Metric to compute the time it took to reach the goal.

Requires the threshold, des_distance.

class plannerbenchmark.postProcessing.metrics.IntegratedErrorMetric(name: str, measNames: list, params: dict)

Bases: plannerbenchmark.postProcessing.metrics.Metric

Metric to compute the integrated deviation error from reference trajectory.

Requires the threshold, des_distance. This metric computes the averaged deviation error from the first time the threshold was reached.
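
A hedged sketch of this computation, assuming errors holds the deviation from the reference trajectory and distances the distance to it at every time step; all names are illustrative:

    import numpy as np

    def integratedError(errors, distances, threshold: float) -> float:
        # Index of the first time step at which the threshold was reached.
        reached = np.flatnonzero(np.asarray(distances) <= threshold)
        if reached.size == 0:
            return float("inf")  # the reference was never reached
        # Average the deviation error from that time step onward.
        return float(np.mean(np.asarray(errors)[reached[0]:]))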

class plannerbenchmark.postProcessing.metrics.ClearanceMetric(name: str, measNames: list, params: dict)

Bases: plannerbenchmark.postProcessing.metrics.Metric

Metric to compute the minimum clearance from any obstacle.

Requires the dimension of the obstacles, m, the dimension of the configuration space, n, all obstacles present in the scenario, obstacles, and the link inflation, r_body. Based on the forward kinematics, the distance between all links and all obstacles is computed. The clearance is the minimum of all those values. The output of this metric provides information about the minimum distance between every link and every obstacle.
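
An illustrative sketch of the clearance computation; the forward kinematics function fk and the obstacle representation (position and radius) are assumptions, not the package's actual API:

    import numpy as np

    def minimumClearance(q_trajectory, fk, obstacles, r_body: float) -> float:
        clearance = float("inf")
        for q in q_trajectory:
            # fk(q) is assumed to yield the workspace position of every link.
            for link_position in fk(q):
                for obst_position, obst_radius in obstacles:
                    distance = (
                        np.linalg.norm(np.asarray(link_position) - np.asarray(obst_position))
                        - obst_radius
                        - r_body  # inflate the link by r_body
                    )
                    clearance = min(clearance, distance)
        return clearance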

class plannerbenchmark.postProcessing.metrics.DynamicClearanceMetric(name: str, measNames: list, params: dict)

Bases: plannerbenchmark.postProcessing.metrics.Metric

Metric to compute the clearance with dynamic obstacles.

Requires the dimension of the obstacles, m, the dimension of the configuration space, n, the inflation radius of the robot links, r_body, and the radius of the dynamic obstacles, r_obsts. Minimum distances between all robot links and all obstacles are computed and returned.

class plannerbenchmark.postProcessing.metrics.PathLengthMetric(name: str, measNames: list, params: dict)

Bases: plannerbenchmark.postProcessing.metrics.Metric

Metric to compute the path length of the trajectory in the workspace.

The path length is computed in the workspace, not in the configuration space.

class plannerbenchmark.postProcessing.metrics.SelfClearanceMetric(name: str, measNames: list, params: dict)

Bases: plannerbenchmark.postProcessing.metrics.Metric

Metric to compute clearance between different links on the robot.

Requires the dimension of the obstacles, m, the dimension of the configuration space, n, the inflation radius of the robot links, r_body, and the list of link pairs that should be evaluated, pairs.

class plannerbenchmark.postProcessing.metrics.SuccessMetric(name: str, measNames: list, params: dict)

Bases: plannerbenchmark.postProcessing.metrics.Metric

Metric to compute if the experiment was successful.

Requires the minimum clearance and information on whether the goal was reached. Both can be computed using one of the above metrics. An experiment is considered successful if the goal was reached and the clearance was always positive.
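
The success criterion in a nutshell, as a sketch with illustrative names; both inputs are results of previously computed metrics:

    def isSuccessful(goal_reached: bool, min_clearance: float) -> bool:
        # Successful iff the goal was reached and no collision occurred.
        return goal_reached and min_clearance > 0.0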

Evaluation

class plannerbenchmark.postProcessing.caseEvaluation.CaseEvaluation(folder: str, recycle: bool = False)

Bases: object

Evaluation class for a single case.

decodeFolderName() → None

Decodes the folder name into planner and timeStamp using regex.

evaluateMetrics() → None

Evaluates all metrics specified for this experiment.

The evaluations of the metrics are stored in the self._kpis dictionary. Note that all metric evaluations are dictionaries themselves with a required key, short, that summarizes the result of the metric.
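
A hedged example of what such a dictionary might look like; the metric names, values, and the detail key are placeholders, only the required key short is taken from the description above:

    kpis = {
        "solverTime": {"short": 4.2},            # summary: average solver time in ms
        "clearance": {
            "short": 0.13,                       # required summary of the metric
            "minDistances": [0.13, 0.25, 0.40],  # hypothetical detail entry
        },
    }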

evaluateSuccess() → None

Evaluates whether the planning problem was successfully solved.

A problem was successfully solved if the goal was reached and no collision occurred during execution (minClearance > 0).

experiment() → plannerbenchmark.generic.experiment.Experiment

Gets the Experiment instance of this case.

interval() → int

Gets the time interval of the planner in this experiment.

kpis(short: bool = False) → dict

Gets key performance indicators.

Parameters

short (bool, optional) – Flag specifying whether only the kpi summaries should be returned. (by default the detailed description is returned)

plannerName() → str

Gets the planner name of the experiment as a string.

process() → None

Processes the experiment.

readData() → None

Reads in the results csv-file as a dictionary.

readResults() → None

Reads results from previous evaluations.

setMetrics(metricNames: list) → None

Sets the metrics for case evaluation.

Parameters

metricNames (list) – List of metrics that should be evaluated.

timeStamp() → str

Gets the time stamp of the experiment as a string.

writeKpis() → None

Writes the kpis to the postProcess.yaml file.

writeResults() → None

Writes results to yaml files.
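
A hedged sketch of a typical single-case workflow; the folder path and the metric names are placeholders:

    from plannerbenchmark.postProcessing.caseEvaluation import CaseEvaluation

    evaluation = CaseEvaluation("results/myPlanner_20220101_120000")
    evaluation.setMetrics(["solverTime", "clearance", "success"])
    evaluation.process()                # read data and evaluate all metrics
    print(evaluation.kpis(short=True))  # print only the metric summaries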

class plannerbenchmark.postProcessing.seriesEvaluation.SeriesEvaluation(folder: str, recycle: bool = False)

Bases: object

Evaluation class for a series.

_folder

Full path to the series folder.

Type

str

_recycle

Flag that tells the evaluation whether old evaluations can be recycled. (by default this is set to False)

Type

bool

filterKpis(kpiDict: dict) → list

Transforms the kpis to a list and excludes the success metric.

filterMetricNames() → list

Filters the metric names to exclude the success metric.

process() → None

Performs postprocessing for a series of experiments.

For every experiment folder inside the series folder, a case evaluation is performed. Then, a summary of all cases is composed in this class.

success(plannerName: str, timeStamp: str) → dict

Gets the evaluation of the success metric.

writeKpis() → None

Writes the kpis to the postProcess.yaml file.

writeResultTables() → None

Writes the result table to the successTable.csv-file.

writeResults() → None

Writes results to yaml files.

class plannerbenchmark.postProcessing.seriesComparison.SeriesComparison(folder: str, recycle: bool = False)

Bases: plannerbenchmark.postProcessing.seriesEvaluation.SeriesEvaluation

Series comparison between two planners.

compare() → None

Compares the performance of two planners.

Two planners are compared by computing the ratio for all individual metrics for every experiment in the series.
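
A minimal sketch of such a per-experiment ratio, with the kpi structure simplified to a flat {metricName: value} mapping for illustration:

    def metricRatios(kpis_a: dict, kpis_b: dict) -> dict:
        # For metrics where smaller is better, a ratio below 1 means
        # that planner A outperformed planner B on that metric.
        return {name: kpis_a[name] / kpis_b[name] for name in kpis_a}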

filterKpis(kpiDict: dict) → list

Transforms the kpis to a list and excludes the success metric.

filterMetricNames() → list

Filters the metric names to exclude the success metric.

getCasesSolvedByBoth() → list

Gets the time stamps of all cases that were solved by both planners.

Returns

List containing, as strings, the time stamps of all cases that were solved by both planners.

Return type

list of str

getPlannerNames() → list

Gets planner names.

process() → None

Processes the series, writes the results, and compares the different planners.

readResults()

Reads results from previous evaluations.

success(plannerName: str, timeStamp: str) → dict

Gets the evaluation of the success metric.

writeComparison() → None

Writes the comparison to the resultTable_comparison.csv-file.

writeKpis() → None

Writes the kpis to the postProcess.yaml file.

writeResultTables() → None

Writes the result table to the successTable.csv-file.

writeResults() → None

Writes results to yaml files.

Plotting

class plannerbenchmark.postProcessing.casePlotting.CasePlotting(folder: str)

Bases: object

Wrapper that selects the correct gnuplot script to plot the results of the experiment in an appropriate format.

plot() → None

Calls the correct gnuplot script based on the robot type.

The gnuplot scripts are called using subprocess.Popen to avoid additional libraries. Depending on the robot type, the gnuplot scripts take a different number of arguments. Output from the gnuplot scripts is redirected to subprocess.PIPE.
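
A minimal sketch of such a call; the script name and its arguments are placeholders, not the actual scripts shipped with the package:

    import subprocess

    process = subprocess.Popen(
        ["gnuplot", "-c", "plotTrajectory.gpi", "results/experiment_folder"],
        stdout=subprocess.PIPE,  # gnuplot output is captured instead of printed
        stderr=subprocess.PIPE,
    )
    stdout, stderr = process.communicate()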

class plannerbenchmark.postProcessing.seriesPlotting.SeriesPlotting(folder: str, nbMetrics: int)

Bases: object

SeriesPlotting compares different planners on the same experiments.

getPlannerNames() → list

Extracts the different planners present in the series.

plot() → None

Calls the script to generate the series plots.

class plannerbenchmark.postProcessing.seriesComparisonPlotting.SeriesComparisonPlotting(folder: str, nbMetrics: int)

Bases: object

Plotting wrapper for series comparisons.

getPlannerNames() → list

Gets planner names.

plot() → None

Calls the correct gnuplot script.

The gnuplot scripts are called using subprocess.Popen to avoid additional libraries. Depending on the robot type, the gnuplot scripts take a different number of arguments. Output from the gnuplot scripts is redirected to subprocess.PIPE.

Helpers

plannerbenchmark.postProcessing.helpers.createMetricsFromNames(names: str, experiment: plannerbenchmark.generic.experiment.Experiment, interval: int = 1) → list

Creates metrics from the given names.

Each metric requires different information about the experiment. This function extracts the right information from the experiment and the planner to construct the metrics based on their names.

Parameters
  • names (str) – Metric names.

  • experiment (Experiment) – Experiment instance for which the metrics should be added.

  • interval (int) – Interval of the planner. This is needed for the solverTime metric. (by default it is set to 1, indicating that the planner was executed at every time step)

Returns

Returns a list of all metrics whose names were specified.

Return type

list
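
A hedged usage sketch; the metric names are examples and experiment stands for an Experiment instance created elsewhere. Although the signature above annotates names as str, a list of metric names is assumed here for illustration:

    from plannerbenchmark.postProcessing.helpers import createMetricsFromNames

    # experiment: an Experiment instance created elsewhere
    metrics = createMetricsFromNames(
        ["solverTime", "clearance"],  # metric names (assumed list form)
        experiment,
        interval=10,                  # the planner was invoked every 10 steps
    )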