Update add_asym_connectivity_penalties(), add_connectivity_penalties(), and
marxan_connectivity_data_to_matrix() documentation so that examples
are standalone and do not affect the session by loading packages.

Update marxan_boundary_data_to_matrix() and
marxan_connectivity_data_to_matrix() documentation so that it
provides more information on how the functions work.

Update marxan_boundary_data_to_matrix() and
marxan_connectivity_data_to_matrix() documentation so that examples
are standalone and do not affect the session by loading packages.

Fix bug in add_max_utility_objective()
that caused the optimization
process to throw an error about problem infeasibility when using
feature data that contain negative values (#334). Thanks to @hannahmp
for bug report.

Fix bug in presolve_check() that would cause it to erroneously suggest that
many planning units don't have any feature data associated with them. This
bug was caused when the feature data contained relatively large, negative values.

Fix bug in binary_stack() that caused it to throw an error when working
with raster data containing zeros (#333).

Fix bug in add_absolute_targets() where it would not throw a warning to
let the user know that a problem already had targets defined, and so adding
the new targets would override the existing targets defined for the problem.

Fix bug in as.ZonesRaster
that resulted in an error when trying to
convert a SpatRaster
zones object (i.e., a zones
object with terra
package data) into Raster
zones object (i.e., a zones
object with
raster package data).

Fix bug in write_problem()
needlessly printing messages about the
gurobi package not being installed when the function is trying to
automatically determine which solver to use (i.e., when using
solver = NULL
) and the package is not available.

Fix bug in branch_matrix() where it would not automatically convert
objects to the phylo class in the ape package.

Update add_asym_connectivity_penalties()
so that
it now specifies that asymmetric connectivity values are required when
symmetric values are incorrectly supplied (#339). Thanks to @DanWismer for
bug report.

Update warnings so that they are displayed using rlang::warn().

Update compile()
so that it throws an error when using the expanded
version of a problem formulation with negative feature values. This
is because the expanded version of the problem formulations are not
compatible with negative feature values. Currently, the expanded
version of the problem formulation is only required when using
add_feature_contiguity_constraints()
.

Fix bug in print()
and summarize()
not displaying correct text for
linear constraints (#330).

Update the default portfolio for problem()
objects. This
new default portfolio -- which can be manually specified using
add_default_portfolio()
-- involves simply generating a single solution.
This new default portfolio method was chosen because
planning problems that contain insufficient data (e.g., feature and cost
data) to identify meaningful priorities can sometimes result in solutions
containing strange spatial artifacts (e.g., lines or bands of selected
planning units, see #205 and #268). Since the presence of these spatial
artifacts can indicate an under-specified problem and shuffling
optimization problems can suppress them, we have
decided to update the default portfolio so that it does not shuffle problems.
If users wish to prevent spatial artifacts from appearing in solutions, then
spatial penalties (e.g., add_boundary_penalties()
), spatial constraints
(e.g., add_neighbor_constraints()
), or shuffle portfolios
(e.g., add_shuffle_portfolio(number_solutions = 1)
) can be used.
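For instance, a minimal sketch (using the package's built-in simulated dataset) of the boundary penalty option mentioned above; the penalty value is purely illustrative and would need calibration for real data:

```r
# discourage spatial artifacts by penalizing exposed boundary length
library(prioritizr)

p <-
  problem(get_sim_pu_raster(), get_sim_features()) %>%
  add_min_set_objective() %>%
  add_relative_targets(0.1) %>%
  add_boundary_penalties(penalty = 0.001) %>%
  add_binary_decisions()

# solve the problem (a supported solver must be installed, e.g., HiGHS)
s <- solve(p)
```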
New add_default_portfolio() function for specifying the default
behavior for generating a solution (see Notice above for further details).

Update solve()
so that it provides information on the optimality of
solutions (#323). For example, you might specify a 10% optimality gap
for the optimization process (e.g., using add_highs_solver(gap = 0.1)
), and
this might produce a solution that is within 7% of optimality. The
resulting output from solve()
will now provide this information about
the solution (i.e., the 7% from optimality), and can be accessed
using the gap
attribute (e.g., attr(x, "gap")
, where x
is the output
from solve()
). Note that this information is currently only available when
using the Gurobi or HiGHS solvers.
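For example, a minimal sketch (assuming the highs package is installed) showing how to inspect the reported optimality gap of a solution:

```r
library(prioritizr)

# build a simple problem and request a 10% optimality gap
p <-
  problem(get_sim_pu_raster(), get_sim_features()) %>%
  add_min_set_objective() %>%
  add_relative_targets(0.1) %>%
  add_binary_decisions() %>%
  add_highs_solver(gap = 0.1, verbose = FALSE)

s <- solve(p)

# report how far the returned solution is from optimality
print(attr(s, "gap"))
```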
Fix bug in add_linear_constraints() and add_linear_penalties() that
resulted in an incorrect error message being shown (#324).

Fix bug in add_shuffle_portfolio()
that prevented solvers from using a
pre-specified starting solution (per the start
parameter) correctly.
Please note that this bug did not result in incorrect solutions, it only
meant that any pre-specified starting solutions were not used properly.

Fix bug in add_cplex_solver()
that caused solutions to not provide
runtime information for the optimization process.

Update add_shuffle_portfolio()
so that optimization problems are
randomly shuffled when a single solution is requested. This update should
help prevent "strange" solutions that contain long horizontal lines/bands of
planning units (#205, #268).

Update add_contiguity_constraints()
and
add_feature_contiguity_constraints()
to be compatible with updates to
the igraph package.

Update write_problem()
so that it can use the gurobi package to write
problems (if desired). This substantially reduces run time, because writing
problems using the Rsymphony package also requires solving them.

Update presolve_check() to throw a warning if a problem has a single feature
(#309). Thanks to Sandra Neubert (@sandra-neubert) for code contribution.

Update print()
and summary()
for problem()
objects so that all
text is printed at once (rather than sequentially).

Fix write_problem() so that it works as expected (#312).

Update problem()
, add_linear_constraints()
, add_linear_penalties()
,
add_locked_in_constraints()
, add_locked_out_constraints()
,
adjacency_matrix()
, binary_stack()
, category_layer()
,
connectivity_matrix()
,fast_extract()
, intersecting_units()
,
proximity_matrix()
, rij_matrix()
, simulate_data()
,
simulate_species()
, simulate_cost()
, and zones()
and other functions so
that they will throw an error if a categorical terra::rast()
object is
provided as an argument (#313). This is because categorical rasters are not
supported. Thanks to Martin Jung (@Martin-Jung) for bug report.

Fix bug in problem()
not throwing multiple warnings with unusual data
(e.g., given cost and feature data with negative values, previously
only a single warning about negative costs would be thrown).
Thanks to Sandra Neubert (@sandra-neubert) for bug report.

Update problem()
to be more memory efficient when using a sparse matrix
(dgCMatrix
) argument for the rij_matrix
parameter.

Update error messages so that they are displayed with Caused by error
instead of Caused by NULL.

Update add_locked_in_constraints() and add_locked_out_constraints() error
error
messages when supplying locked_in
and locked_out
objects
that do not spatially intersect with the planning units.

Previously, the documentation recommended using scales::rescale()
to rescale boundary length and connectivity data.
However, we now realize that this approach can produce inconsistencies for
boundary length data (e.g., the total perimeter of a planning unit might not
necessarily equal the sum of the edge lengths). In some cases, these
inconsistencies can cause solutions generated with high boundary
penalties (i.e., using add_boundary_penalties()
with a high penalty
value) to contain a large reserve (i.e., a spatial cluster of selected of
planning units) with a single unselected planning unit in the middle of the
reserve. In the worst case, these inconsistencies produce a situation
where increasing boundary penalties (i.e., generating multiple solutions with
add_boundary_penalties()
and increasing penalty
values)
does not alter the spatial configuration of solutions. Although use of
scales::rescale()
did not produce such behavior prior to version 8.0.0,
changes to the output format for boundary_matrix()
in subsequent versions
now mean that scales::rescale()
can cause these issues. We now recommend
using the new rescale_matrix()
function to rescale boundary length data to
avoid numerical issues, while also avoiding such inconsistencies.

New rescale_matrix()
function to help with rescaling boundary length
(e.g., generated using boundary_matrix()
) and connectivity
(e.g., generated using connectivity_matrix()
) data to avoid
numerical issues during optimization (#297). Thanks to
Jason Flower (@jflowernet) and Joan Giménez Verdugo for bug reports.
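A minimal sketch of the recommended workflow: rescale boundary length data with rescale_matrix() before adding boundary penalties (using the built-in simulated sf planning units; the penalty value is purely illustrative):

```r
library(prioritizr)

pu <- get_sim_pu_polygons()
features <- get_sim_features()

# generate boundary length data and rescale them to avoid numerical issues
bm <- boundary_matrix(pu)
bm <- rescale_matrix(bm)

# build the problem using the rescaled boundary data
p <-
  problem(pu, features, cost_column = "cost") %>%
  add_min_set_objective() %>%
  add_relative_targets(0.1) %>%
  add_boundary_penalties(penalty = 0.001, data = bm) %>%
  add_binary_decisions()
```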
Update the print() and summary()
methods for problem()
objects
so that they will now better describe situations when the planning cost
data all contain a constant value (e.g., all costs equal to 1).

Update documentation to recommend using the rescale_matrix()
function
instead of the scales::rescale()
function for rescaling boundary
length and connectivity data (#297).

Update add_neighbor_constraints() so that it has an additional
clamp argument, so that the minimum number of neighbors permitted for
each planning unit in the solution is clamped to the number of neighbors that
each planning unit has. For example, if a planning unit has 2 neighbors,
k = 3
, and clamp = FALSE
, then the planning unit could not
ever be selected in the solution. However, if clamp = TRUE
, then
the planning unit could potentially be selected in the solution if both of
its 2 neighbors were also selected.
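A minimal sketch (using the built-in simulated dataset) of the clamp argument, so that planning units with fewer than k neighbors can still be selected in a solution:

```r
library(prioritizr)

p <-
  problem(get_sim_pu_raster(), get_sim_features()) %>%
  add_min_set_objective() %>%
  add_relative_targets(0.1) %>%
  add_neighbor_constraints(k = 3, clamp = TRUE) %>%
  add_binary_decisions()
```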
Fix bug in problem() that prevented features from being supplied as
a data.frame
that contains feature names stored as a factor
(#295).
Thanks to Carl Boettiger (@cboettig) for bug report.

Update problem()
so that it will throw a meaningful error message if the
user accidentally specifies the geometry column for sf
planning unit data
as a feature.

Update rij_matrix()
so that it works when none of the raster layers being
processed fit into memory (#290). Thanks to Edwards Marc (@edwardsmarc) for
bug report.get_sim_pu_raster()
, get_sim_locked_in_raster()
,
get_sim_locked_out_raster()
, get_sim_zones_pu_raster()
,
get_sim_features()
, get_sim_zones_features()
).

Update add_manual_locked_constraints() and
add_manual_bounded_constraints() so that the indices
specified in the argument data$pu
consistently refer to the total
units. In other words, the indices in data$pu
should refer to the row
numbers (for planning units in sf
or data.frame
format) or cell numbers
(for planning units in Raster
or SpatRaster
format) of the planning units
that should be locked.
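A minimal sketch (using the built-in simulated raster dataset) showing that the pu column refers to cell numbers for raster planning units. It assumes that, for a single-zone problem, a data.frame with pu and status columns is sufficient (per the add_manual_locked_constraints() documentation):

```r
library(prioritizr)
library(terra)

pu <- get_sim_pu_raster()

# cell numbers of the first two non-missing planning units
idx <- which(!is.na(terra::values(pu)[, 1]))[1:2]

# lock these planning units into the solution
locked_data <- data.frame(pu = idx, status = 1)

p <-
  problem(pu, get_sim_features()) %>%
  add_min_set_objective() %>%
  add_relative_targets(0.1) %>%
  add_manual_locked_constraints(locked_data) %>%
  add_binary_decisions()
```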
Update solve.ConservationProblem() so that it can be called directly
(#283). Thanks to Tin Buenafe (@SnBuenafe) for bug report.

Update problem() so that an error will be thrown if the argument to features
contains only missing (NA
) values (e.g., an sf object is supplied that
has NA
values in all rows for a feature's column).

Although classes from the sp and raster packages (e.g., raster::stack()
and sp::SpatialPolygonsDataFrame()
)
are still supported, the prioritizr package will now throw deprecation
warnings. Since support for the sp and raster package classes
will be fully deprecated and removed in a later version this year, we
recommend updating code to use the sf and terra packages.

problem()
objects can now contain many more
constraints and penalties. Note that any problem()
objects
that were produced using earlier versions of the package are no longer
compatible. Thanks to Jason Flower (@jflowernet) for bug report on memory issues.library(sf)
).get_sim_pu_raster()
,
get_sim_pu_polygons()
, get_sim_pu_lines()
, get_sim_pu_points()
,,
get_sim_locked_in_raster()
, get_sim_locked_out_raster()
,
get_sim_zones_pu_raster()
, get_sim_zones_pu_polygons()
,
get_sim_phylogeny()
, get_sim_features()
, get_sim_zones_features()
).
These functions now return sf::st_sf()
,
terra::rast()
, ape::read.tree()
and zones()
objects.
Note that these functions are provided because data(...)
cannot be
used with terra::rast()
objects. See ?data
for more information.
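A minimal sketch showing the accessor functions that replace data() for the built-in simulated datasets:

```r
library(prioritizr)

sim_pu_raster <- get_sim_pu_raster()      # planning unit costs (terra::rast())
sim_features <- get_sim_features()        # feature data (terra::rast())
sim_pu_polygons <- get_sim_pu_polygons()  # planning units (sf::st_sf())
```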
The boundary_matrix() output format has been updated. This means that
users will not be able to use boundary data generated using previous
versions of the package.add_lpsymphony_solver()
now throws an error, instead of a warning,
if an old version of the lpsymphony package is installed that is known
to produce incorrect results.marxan_boundary_data_to_matrix()
function is no longer compatible
with boundary data for multiple zones.distribute_load()
function has been deprecated, because it is no
longer used. For equivalent functionality, See parallel::splitIndices()
.new_optimization_problem()
and predefined_optimization_problem()
functions have been superseded by the new optimization_problem()
function.is.Waiver()
, add_default_decisions()
new_id()
, is.Id()
, print.Id()
, pproto()
."bad error message"
!print()
function for problem()
, optimization_problem()
, and
zones()
objects has been updated to provide more information.summary()
function to provide extensive detail on problem()
objects.add_feature_weights()
when applied to problems with
an add_max_phylo_div_objective()
or add_max_phylo_end_objectve()
.
Specifically, the bug meant that weights weren't being applied to
problems with these particular objectives.add_gurobi_solver()
documentation for opening vignette.add_extra_portfolio()
) default to generating 10 solutions.solve()
function will now output tibble::tibble()
objects
(instead of data.frame()
objects), when the planning unit data are
tibble::tibble()
objects.boundary_matrix()
function now uses terra::sharedPaths()
for
calculations, providing greater performance (#257). Thanks to Jason Flower (@jflowernet) for bug report.eval_ferrier_importance()
function can now be used with
any objective function that uses targets and a single zone.add_shuffle_portfolio()
and eval_replacement_importance()
functions.add_linear_penalties()
function so that the penalty parameter is applied
correctly (#342). In previous versions, this bug meant that solving a problem
with penalty = 1
would produce solution based on penalty = -1
(and vice
versa). Additionally, this bug also meant that compiling/solving a problem
multiple times would cause the formulation to alternate between using
penalty = 1
and penalty = -1
. Thanks to Carina Firkowski
(@Carina-Firkowski) for bug report.add_highs_solver()
function for the HiGHS optimization software (#250).add_default_solver()
to use the HiGHS solver if the Gurobi, IBM
CPLEX, and CBC solvers aren't available.add_default_solver()
so that the add_lpsymphony_solver()
is used
instead of add_rsymphony_solver()
.problem()
and eval_feature_representation_summary()
to avoid
needlessly converting sparse matrices to regular matrices (#252).NEWS.md
.boundary_matrix()
to use STR query trees by default.simulate_data()
, simulate_cost()
and simulate_species()
functions to improve performance using the fields package.add_cbc_solver()
to throw a segfault when solving
a problem wherein the rij_matrix(x)
has a zero amount for the last feature
in the last planning unit (#247). Thanks to Jason Everett (@jaseeverett) for
bug report.boundary_matrix()
to use the geos package (#218).simulate_cost()
and simulate_species()
so that they no longer
depend on the RandomFields package. Note that these functions will now
produce different outputs from previous versions (even when controlling
for the random number generator state).presolve_check()
function to (i) reduce chances of
it incorrectly throwing an error when the input data won't actually
cause any issues, and (ii) provide recommendations for addressing issues.add_min_largest_shortfall_objective()
so that
examples complete in a shorter period of time.x
that are numeric
or matrix
format, (ii)
x
that contain missing (NA
) values, and (iii) rij_matrix
that
are in dgCMatrix
format. This bug only occurred when all three of these
specific conditions were met. When it occurred, the bug caused planning units
with NA
cost values to receive very high cost values (e.g., 1e+300).
This bug meant that when attempting to solve the problem, the
presolve checks (per presolve_check()
) would throw an error complaining
about very high cost values (#236). Thanks to @lmathon for bug report.add_connectivity_penalties()
function and documentation so that
it is designed specifically for symmetric connectivity data.add_asym_connectivity_penalties()
function that is designed
specifically for asymmetric connectivity data. This function has been
created to help ensure that asymmetric connectivity data are handled
correctly. For instance, using asymmetric connectivity data with
add_connectivity_penalties()
function in previous versions of the package
sometimes resulted in the data being incorrectly treated as symmetric data.
Additionally, this function uses an updated mathematical formulation
for handling asymmetric connectivity so that it provides similar
results to the Marxan software (#223). Thanks to Nina Faure Beaulieu
(@ninzyfb) for bug report.

Update add_locked_in_constraints() and add_locked_out_constraints()
to ensure that a meaningful error message is provided when no planning
units are locked (#234). Thanks to Alec Nelson (@AlecNelson) for bug report.

Update presolve_check()
so that it does not throw a meaningless warning
when the mathematical objective function only contains zeros.presolve_check()
to help reduce chances of mis-attributing
high connectivity/boundary values due to planning unit costs.marxan_problem()
function so that it can be used with asymmetric
connectivity data. This is now possible because there are dedicated functions
for symmetric and asymmetric connectivity.zones
parameter of the
add_connectivity_penalties()
function.eval_ferrier_importance()
(#220). Although this
function is now recommended for general use, the documentation
contained an outdated warning and so the warning has now been removed.eval_n_summary()
function now returns a table with
the column name "n"
(instead of "cost"
) for the number
of selected planning units (#219).marxan_problem()
for importing
Marxan data files.sim_pu_sf
and sim_pu_zones_sf
data given class
updates to the sf package (compatible with version 1.0.3+).write_problem()
function.

Update eval_ferrier_importance() function with verified code.

Update presolve_check() function to throw a warning when
really high values are specified in add_neighbor_constraints().

Update add_cbc_solver()
function so that it can use a starting solution to reduce run time (via the start_solution
parameter).
New add_linear_constraints() function to add arbitrary constraints.

Update add_min_shortfall_objective()
and
add_min_largest_shortfall_objective()
functions to handle targets with
a target threshold value of zero.eval_connectivity_summary()
function,
and tweaking the header in the README.problem()
function.add_gurobi_solver()
function so that it doesn't print excess
debugging information (accidentally introduced in previous version 7.0.1.1).add_gurobi_solver()
function to support the node_file_start
parameter for the Gurobi software. This functionality is useful for solving large
problems on systems with limited memory (#192). Thanks to @negira and Alec Nelson (@AlecNelson) for bug reports and suggestions.
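A minimal sketch (assuming the gurobi package and a Gurobi license are installed) of the node_file_start parameter; per the Gurobi documentation, node files are written to disk once the branch-and-bound tree exceeds the specified memory (in GB):

```r
library(prioritizr)

p <-
  problem(get_sim_pu_raster(), get_sim_features()) %>%
  add_min_set_objective() %>%
  add_relative_targets(0.1) %>%
  add_binary_decisions() %>%
  add_gurobi_solver(node_file_start = 0.5)

s <- solve(p)
```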
New write_problem() function to save the mixed integer programming
representation of a conservation planning problem to a file. This
function is useful for manually executing optimization solvers.

Update rij_matrix() function documentation (#189).

Update add_gurobi_solver()
function to allow specification of a starting
solution (#187). This functionality is useful for conducting a boundary
penalty parameter calibration exercise. Specifically, users can specify the
starting solution for a given penalty value based on the solution
obtained using a smaller penalty value.
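A minimal sketch of the calibration workflow described above (assuming the gurobi package is installed, and assuming the parameter is named start_solution as in recent versions); penalty values are purely illustrative:

```r
library(prioritizr)

# base problem without boundary penalties
p0 <-
  problem(get_sim_pu_raster(), get_sim_features()) %>%
  add_min_set_objective() %>%
  add_relative_targets(0.1) %>%
  add_binary_decisions()

# solve with a small boundary penalty
s1 <- solve(p0 %>% add_boundary_penalties(0.0001) %>% add_gurobi_solver())

# solve with a larger penalty, warm-started with the previous solution
s2 <- solve(
  p0 %>%
    add_boundary_penalties(0.001) %>%
    add_gurobi_solver(start_solution = s1)
)
```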
Update solve() so it assigns layer names based on zone names for solutions in
raster format.

Fix add_cbc_solver()
so that time_limit
and verbose
parameters work
as expected.add_gurobi_solver()
function to report timings following the same
methods as the other solvers.add_lpsymphony_solver()
function to be more memory efficient (#183).add_default_solver()
so that add_cbc_solver()
is now preferred
over all other open source solvers.add_cbc_solver()
that resulted in incorrect solutions to
problems with equality constraints.add_cbc_solver()
function to generate solutions using the open source
CBC solver via the rcbc package (https://github.com/dirkschumacher/rcbc).add_rsymphony_solver()
and add_lpsymphony_solver()
functions to
have a default time_limit
argument set as the maximum machine integer for
consistency.add_rsymphony_solver()
, add_lpsymphony_solver()
, and
add_gurobi_solver()
functions to require logical
(TRUE
/FALSE
)
arguments for the first_feasible
parameter.add_default_solver()
function so that it prefers
add_lpsymphony_solver()
over add_rsymphony_solver()
, and
add_cbc_solver()
over all open source solvers.

The documentation previously stated that the gap parameter
for the add_rsymphony_solver()
and add_lpsymphony_solver()
corresponded
to the maximum absolute difference from the optimal objective value.
This was an error due to misunderstanding the SYMPHONY documentation.
Under previous versions of the package, the gap
parameter actually
corresponded to a relative optimality gap expressed
as a percentage (such that gap = 10 indicates that solutions must be
within 10% of optimality). We have now fixed this error, and the documentation
for the gap parameter is now correct. We apologize for any
inconvenience this may have caused.

New add_min_largest_shortfall() objective function.

Fix bug when certain solution
arguments are
supplied to the evaluation functions (#176). Thanks to Phil Dyer (@PhDyellow)
for bug report.sf
planning unit data.eval_
) to mention that
the argument to solution
should only contain columns that correspond to
the solution (#176). Thanks to Phil Dyer (@PhDyellow) for bug report.sf
data to documentation for importance
evaluation functions (#176).add_manual_targets()
documentation.eval_cost()
function to calculate the cost of a solution.eval_boundary()
function to calculate the exposed boundary length
associated with a solution.eval_connectivity()
function to calculate the connectivity associated
with a solution.eval_feature_representation()
function to assess how well each
feature is represented by a solution. This function is similar to the
deprecated eval_feature_representation()
function, except that it
follows conventions for other evaluation functions (e.g. eval_cost
).eval_target_representation()
function to assess how well each
target is met by a solution. This function is similar to the
eval_feature_representation()
, except that it corresponds to the targets
in a conservation planning problem.ferrier_score
function as eval_ferrier_importance()
function for
consistency.replacement_cost
function as eval_replacement_importance()
function
for consistency.rarity_weighted_richness
function as
eval_rare_richness_importance()
function for consistency.feature_representation()
function. It is now superseded by the
eval_feature_representation()
function.add_locked_out_constraints()
function to enable a single planning unit
from being locked out of multiple zones (when data are specified in raster
format).problem()
function to reduce memory consumption for sparse
matrix arguments (#164).add_cplex_solver()
function to generate solutions using
IBM CPLEX
(via the cplexAPI package).add_loglinear_targets()
and
loglinear_interpolation()
functions. Previously they used a natural
logarithm for log-linear interpolation. To follow target setting approaches
outlined by Rodrigues et al. (2004), they now use the decadic logarithm (i.e.
log10()
).add_gap_portfolio()
documentation to note that it only works for
problems with binary decisions (#159). Thanks to @kkemink for report.ferrier_score()
function. It no longer incorrectly
states that these scores can be calculated using CLUZ and now states
that this functionality is experimental until the formulation can be double
checked.--run-donttest
).feature_representation()
bug incorrectly throwing error with vector
planning unit data (e.g. sf
-class data).

Fix bug causing rij_matrix()
to throw an error for large raster data
(#151).

New add_linear_penalties()
to add penalties that penalize planning units
according to a linear metric.
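A minimal sketch (using the built-in simulated dataset) of linear penalties; here, planning units are penalized according to the first simulated feature layer purely for illustration, and the penalty value is arbitrary:

```r
library(prioritizr)

p <-
  problem(get_sim_pu_raster(), get_sim_features()) %>%
  add_min_set_objective() %>%
  add_relative_targets(0.1) %>%
  add_linear_penalties(penalty = 5, data = get_sim_features()[[1]]) %>%
  add_binary_decisions()
```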
Update connectivity_matrix() documentation to provide an example of how
to generate connectivity matrices that account for functional connectivity.solve()
function.solve()
function to the Salt Spring
Island and Tasmania vignettes.compile()
to throw warning when compiling problems that include
feature weights and an objective function that does not use feature weights.add_gurobi_solver()
function to provide more options for controlling
the pre-solve step when solving a problem.ferrier_score()
function to compute irreplaceability scores following
Ferrier et al (2000).proximity_matrix()
function to generate matrices indicating which
planning units are within a certain distance of each other (#6).add_extra_portfolio()
, add_top_portfolio()
, add_gap_portfolio()
functions to provide specific options for generating portfolios (#134).connected_matrix()
function to adjacency_matrix()
function to
follow the naming conventions of other spatial association functions (#6).set_number_of_threads()
, get_number_of_threads()
, and
is.parallel()
functions since they are no longer used with new data
extraction methods.add_pool_portfolio()
function because the new
add_extra_portfolio()
and add_top_portfolio()
functions provide this
functionality (#134).intersecting_units
and fast_extract
functions to use the
exactextractr and fasterize packages to speed up raster data extraction
(#130).

Fix bug in boundary_matrix()
function when handling SpatialPolygon
planning unit data that contain multiple polygons (e.g. a single planning unit
contains two separate islands) (#132).

Fix bug in add_rsymphony_solver() and add_lpsymphony_solver() throwing an
infeasible error message for feasible problems containing continuous or
semi-continuous variables.presolve_check()
function more informative (#124).
Thanks to Amanda Liczner (@aliczner) for bug report.rij_matrix()
so that amounts are calculated correctly for
vector-based planning unit data.fast_extract()
.add_locked_in_constraints()
and add_locked_out_constraints()
functions so that they no longer throw an unnecessary warning when
when they are added to multi-zone problems using raster data with NA
values.add_locked_in_constraints()
and
add_locked_out_constraints()
functions to provide recommended practices
for raster data.rarity_weighted_richness()
returning incorrect scores when
the feature data contains one feature that has zeros amounts in all planning
units (e.g. the tas_features
object in the prioritizrdata package;
#120).add_gurobi_solver()
returning solution statuses that are
slightly larger than one (e.g. 1+1.0e-10) when solving problems with
proportion-type decisions (#118). Thanks to Martin Jung (@Martin-Jung) for
bug report.add_manual_bounded_constraints()
function to apply lower and upper
bounds on planning units statuses in a solution (#118). Thanks to Martin Jung
(@Martin-Jung) for suggestion.replacement_cost()
function to use parallel processing to speed up
calculations (#119).add_gurobi_solver()
, add_lpsymphony_solver()
, and
add_rsymphony_solver()
functions so that they will not return solutions with
values less than zero or greater than one when solving problems with
proportion-type decisions. This issue is the result of inconsistent precision
when performing floating point arithmetic (#117). Thanks to Martin Jung
(@Martin-Jung) for bug report.add_locked_in_constraints()
and add_locked_out_constraints()
functions to provide a more helpful error message the locked_in
/locked_out
argument refers to a column with data that are not logical (i.e.
TRUE
/FALSE
; #118). Thanks to Martin Jung (@Martin-Jung) for bug report.solve()
function to throw a more accurate and helpful error
message when no solutions are found (e.g. due to problem infeasibility or
solver time limits).add_max_phylo_objective()
function to
add_max_phylo_div_objective()
.add_max_phylo_end_objective()
function to maximize the phylogenetic
endemism of species adequately represented in a prioritization (#113).
Thanks to @FerreiraPSM for suggestion.sim_phylogeny
).add_max_phylo_end_objective()
, replacement_cost()
, and
rarity_weighted_richness()
functions to the Prioritizr vignette.add_max_phylo_div_objective()
function.

New replacement_cost() function to calculate irreplaceability scores
for each planning unit in a solution using the replacement cost method (#26).

New rarity_weighted_richness() function to calculate irreplaceability
scores for each planning unit in a solution using rarity weighted richness
scores (#26).

New irreplaceability manual entry to document functions for calculating
irreplaceability scores.

New add_min_shortfall_objective()
function to find solutions that minimize
target shortfalls.
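A minimal sketch (using the built-in simulated dataset) of the minimum shortfall objective; the budget value is purely illustrative:

```r
library(prioritizr)

p <-
  problem(get_sim_pu_raster(), get_sim_features()) %>%
  add_min_shortfall_objective(budget = 1800) %>%
  add_relative_targets(0.2) %>%
  add_binary_decisions()
```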
Update problem() tests so that they work when no solvers are installed.

Add the add_min_shortfall_objective() function to the main vignette.

The feature_representation()
function now requires missing (NA
) values for
planning unit statuses in a solution for planning units that have missing
(NA
) cost data.presolve_check()
function to investigate potential sources of numerical
instability before trying to solve a problem. The manual entry for this
function discusses common sources of numerical instability and approaches
for fixing them.solve()
function will now use the presolve_check()
function to
verify that problems do not have obvious sources of numerical instability
before trying to solve them. If a problem is likely to have numerical
instability issues then this function will now throw an error (unless
solve(x, force = TRUE) is used
).
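A minimal sketch (using the built-in simulated dataset) showing how to run the presolve checks manually, and how to bypass them when solving:

```r
library(prioritizr)

p <-
  problem(get_sim_pu_raster(), get_sim_features()) %>%
  add_min_set_objective() %>%
  add_relative_targets(0.1) %>%
  add_binary_decisions()

# check for potential sources of numerical instability
presolve_check(p)

# solve the problem even if the checks flag potential issues
s <- solve(p, force = TRUE)
```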
The add_rsymphony_solver() function now uses sparse matrix formats so that
attempts can be made to solve large problems with SYMPHONY---though it is
unlikely that SYMPHONY will be able to solve such problems in a feasible
period of time.tibble::as.tibble()
instead of tibble::as_tibble()
.solve()
(#110). Thanks to Martin Jung (@Martin-Jung) for
suggestion.add_boundary_penalties()
and
add_connectivity_penalties()
function (#106).add_rsymphony_solver()
and add_lpsymphony_solver()
sometimes returned infeasible solutions when subjected to a
time limit (#105). Thanks to @magalicombes for bug report.ConservationProblem-class
objects. These methods were implemented to be
used in future interactive applications and are not currently used in the
package. As a consequence, these bugs do not affect the correctness of
any results.bad error message
error being thrown when input rasters are not
comparable (i.e. same coordinate reference system, extent, resolutions, and
dimensionality) (#104). Thanks to @faengl for bug report.solve()
printing annoying text about tbl_df
(#75). Thanks to
Javier Fajardo (@javierfajnolla) for bug report.add_max_features_objective()
example code.add_neighbor_constraints()
and
add_contiguity_constraints()
functions used more memory than they actually
needed (#102). This is because the argument validation code converted sparse
matrix objects (i.e. dgCMatrix
) to base objects (i.e. matrix
) class
temporarily. This bug only meant inefficient utilization of computer
resources---it did not affect the correctness of any results.add_mandatory_allocation_constraints()
function. This function can be
used to ensure that every planning unit is allocated to a management zone in
the solution. It is useful when developing land-use plans where every single
parcel of land must be assigned to a specific land-use zone.$find(x)
method for Collection
prototypes that caused
it to throw an error incorrectly. This method was not used in earlier versions
of this package.add_mandatory_allocation_constraints()
to the Management Zones and
Prioritizr vignettes.feature_representation()
function that caused the "amount_held"
column to have NA values instead of the correct values. This bug only
affected problems with multiple zones.category_layer()
function that
caused it to incorrectly throw an error claiming that the input
argument to x
was invalid when it was in fact valid. This bug is
encountered when different layers in the argument to x
have non-NA values in
different cells.add_contiguity_constraints()
function now uses sparse matrix formats
internally for single-zone problems. This means that the constraints
can be applied to single-zoned problem with many more planning units.add_connectivity_penalties()
function now uses sparse matrix formats
internally for single-zone problems. This means that connectivity penalties
can be applied to single-zoned problem with many more planning units.add_max_utility_objective()
and
add_max_cover_objective()
functions to make it clearer that they
do not use targets (#94).add_locked_in_constraints()
and add_locked_out_constraints()
that incorrectly threw an error when using logical
locked data
(i.e. TRUE
/FALSE
) because it incorrectly thought that valid inputs were
invalid.add_locked_in_constraints()
, add_locked_out_constraints()
,
and add_manual_locked_constraints()
where solving the same problem object
twice resulted in incorrect planning units being locked in or out of the
solution (#92). Thanks to Javier Fajardo (@javierfajnolla) for bug report.feature_abundances()
that caused the solve function to throw an
error when attempting to solve problems with a single feature.add_cuts_portfolio()
that caused the portfolio to return
solutions that were not within the specified optimality gap when using the
Gurobi solver.add_pool_portfolio()
function.feature_representation()
function now allows numeric
solutions with
attributes (e.g. when output by the solve()
function) when calculating
representation statistics for problems with numeric
planning unit data
(#91). Thanks to Javier Fajardo (@javierfajnolla) for bug report.add_manual_targets()
function threw a warning when some features had
targets equal to zero. This resulted in an excessive amount of warnings. Now,
warnings are thrown for targets that are less than zero.

The problem()
function sometimes incorrectly threw a warning that feature
data had negative values when the data actually did not contain negative
values. This has now been addressed.problem
function now allows negative values in the cost and feature
data (and throws a warning if such data are detected).add_absolute_targets()
and add_manual_targets()
functions now allow
negative targets (but throw a warning if such targets are specified).compile
function throws an error if a problem is compiled using
the expanded formulation with negative feature data.add_absolute_targets()
function now throws a warning---instead of an
error---if the specified targets are greater than the feature abundances
in planning units to accommodate negative values in feature data.add_max_cover_objective()
in prioritizr vignette (#90).add_loglinear_targets()
function now includes a feature_abundances()
parameter for specifying the total amount of each feature to use when
calculating the targets (#89). Thanks to Liz Law (@lizlaw) for suggestion.add_relative_targets()
documentation now makes it clear that locked out
planning units are included in the calculations for setting targets (#89).

New feature_abundances()
function to calculate the total amount of each
feature in the planning units (#86). Thanks to Javier Fajardo
(@javierfajnolla) for suggestion.
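A minimal sketch (using the built-in simulated dataset) of feature_abundances():

```r
library(prioritizr)

p <- problem(get_sim_pu_raster(), get_sim_features())

# total amount of each feature within the planning units
feature_abundances(p)
```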
The add_cuts_portfolio() function uses the Gurobi solution pool to
generate unique solutions within a specified gap of optimality when tasked
with solving problems with Gurobi (version 8.0.0+) (#80).add_pool_portfolio()
function to generate a portfolio of solutions using
the Gurobi solution pool (#77).boundary_matrix()
function now has the experimental functionality to
use GEOS STR trees to speed up processing (#74).feature_representation()
function to calculate how well features are represented
in solutions (#73).solve()
function printing superfluous text (#75).problem()
function.sim_pu_zones_stack
, sim_pu_zones_polygons
,
and sim_features_zones
for exploring conservation problems with
multiple management zones.zones
function and Zones
class to organize data with multiple
zones.add_manual_targets()
function for creating targets that pertain to
multiple management zones.add_manual_locked_constraints()
function to manually specify which
planning units should or shouldn't be allocated to specific zones in
solutions.binary_stack()
, category_layer()
, and category_vector()
functions
have been provided to help work with data for multiple management zones.problem()
function now accepts Zone
objects as arguments for
feature
to create problems with multiple zones.add_relative_targets()
and add_absolute_targets()
functions for adding
targets to problems can be used to specify targets for each feature in
each zone.solve()
function now returns a list
of solutions when generating
a portfolio of solutions.zones
parameter) and specify how they should be applied (using the data
parameter). All of these functions
have default arguments that mean that problems with a single zone
should have the same optimal solution as problems created in the earlier
version of the package.add_locked_in_constraints()
and add_locked_out_constraints()
functions for specifying which planning units are locked in or out
now accept matrix
arguments for specifying which zones are locked
in or out.add_feature_weights()
function can be used to weight the representation of each feature in each
zone differently.

Update package overview documentation (?prioritizr
), and README.marxan_problem()
has been updated with more comprehensive documentation
and to provide more helpful error messages. For clarity, it will now only
work with tabular data in the standard Marxan format.add_boundary_penalties()
(#62). Thanks to Liz Law
(@lizlaw) for report.add_locked_in_constraints()
and add_locked_out_constraints()
throw an exception when used with semi-continuous-type decisions (#59).compile()
thrown when the same planning unit is locked in
and locked out now prints the planning unit indices in a readable format.add_locked_in_constraints()
and add_locked_out_constraints()
are ignored when using proportion-type decisions (#58).predefined_optimization_problem()
which incorrectly recognized
some inputs as invalid when they were in fact valid.R CMD check
related to proto package in Depends.add_lpsymphony_solver()
to throw warnings to alert users to
potentially incorrect solutions (partially addressing #40).add_*_objectives
now pass when executed with slow solvers
(partially addressing #40).compile()
to work when no solvers are installed (#41).add_*_solvers
are now unbounded and can accept values
larger than 1 (#44).add_max_cover_objective()
function has been renamed to the
add_max_utility_objective()
, because the formulation does not follow the
historical formulation of the maximum coverage reserve selection problem
(#38).add_max_cover_objective()
function now follows the historical maximum
coverage objective. This fundamentally changes add_max_cover_objective()
function and breaks compatibility with previous versions (#38).add_lpsymphony_solver()
examples and tests to skip on Linux
operating systems.add_lpsymphony_solver()
causing error when attempting to solve
problems.numeric
vector data that caused an error.numeric
vector input with rij data
containing NA values.cran-comments.md
file.apply_boundary_penalties()
and add_connectivity_penalties()
causing the function to throw an error when the number of boundaries/edges is
less than the number of planning units.boundary_matrix()
calculations (#30).ScalarParameter
and ArrayParameter
prototypes to check
that functions for generating widgets have their dependencies installed.numeric
planning unit data and portfolios that caused the
solve()
to throw an error.add_max_phylo_objective()
(#24).Spatial*DataFrame
input to marxan_problem()
would always
use the first column in the attribute table for the cost data. This bug is
serious so analysis that used Spatial*DataFrame
inputs in
marxan_problem()
should be rerun.problem()
objects.add_cuts_portfolio()
on Travis.add_cuts_portfolio()
and add_shuffle_portfolio()
tests on CRAN.data.frame
and Spatial*DataFrame
objects
are now stored in columns named "solution_*" (e.g. "solution_1")
to store multiple solutions.README.Rmd
for examples on accessing this
information.verbose
argument to all solvers. This replaces the verbosity
add_lpsymphony_solver()
and add_rsymphony_solver()
is reduced.add_gurobi_solver.R
, add_lpsymphony_solver.R
,
add_rsymphony_solver.R
, and solvers.R
.
argument in add_lpsymphony_solver()
and add_rsymphony_solver()
.ConservationProblem$print()
now only prints the first three species names
and a count of the total number of features. This update means that
ConservationProblem
objects with lots of features can now safely be printed
without polluting the R console.time_limit
.marxan_problem()
to work with absolute file paths and the INPUTDIR
in Marxan input files (#19). Thanks to Dan Rosauer (@DanRosauer) for bug
report.solve()
when the rij data does not contain the highest planning
unit identifier specified when building the problem()
(#20).devtools::build_vignettes()
. Earlier versions needed the vignettes to be
compiled using the Makefile to copy files around to avoid tangled R code
causing failures during R CMD CHECK. Although no longer needed, the vignettes
can still be compiled using the shell command make vigns
if
desired.README.Rmd
now lives in the top-level directory following standard
practices. It should now be complied using rmarkdown::render("README.Rmd")
or using the shell command make readme
. Note that the figures for
README.md
can be found in the directory man/figures
.prshiny
will now only be run if executed during an
interactive R session. Prior to this R CMD CHECK would hang.quick_start.Rmd
showing how to run
marxan_problem()
using input data.frame()
objects.quick_start.Rmd
counting number of selected planning
unitsREADME.Rmd
tweaks to make it look prettier on website.compile()
function.problem.data.frame
that meant that it did not check for missing
values in rij$pu
.add_absolute_targets()
and add_relative_targets() related to their
standardGeneric being incorrectly definedadd_corridor_targets()
when argument connectivities
is a
list
. The elements in the list are assumed to be dsCMatrix
objects
(aka symmetric sparse matrices in a compressed format) and are coerced
to dgCMatrix
objects to reduce computational burden. There was a typo,
however, and so the objects were coerced to dgCmatrix
and not dgCMatrix
.
This evidently was ok in earlier versions of the RcppArmadillo and/or
Matrix packages but not in the most recent versions.problem()
causing node stack overflows (#21). Thanks to Dan
Rosauer (\DanRosauer) for bug report.parallel::detectCores()
returns NA
on some systems
preventing users from using the Gurobi solver--even when one thread is
specified.structure(NULL, ...)
with structure(list(), ...)
.new_waiver()
.add_default_objectives()
and add_default_targets()
private functions.add_default_decisions()
and add_default_solver()
to own help file.rij_matrix()
duplicating feature data (#13).add_corridor_constraints()
that fails to actually add the
constraints with argument to connectivity
is a list.make install
command so that it now actually installs the
package.