Segmentation fault with boost::polygon

I have polygon data which I processed. Now I want to see how well my processed data fits my original data. For this task I want to use Boost's polygon set operators. The following code gives me a segfault though:

#include <iostream>
#include <boost/polygon/polygon.hpp>

using namespace boost::polygon::operators;
using namespace std;

typedef boost::polygon::polygon_data<double> BPolygon;
typedef boost::polygon::polygon_traits<BPolygon>::point_type BPoint;
typedef boost::polygon::polygon_set_data<double> BPolygonSet;
typedef std::vector<BPolygon> BPolygonVec;


double meassureError(BPolygonVec &polys1, BPolygonVec &polys2)
{
  BPolygonSet set1;
  BPolygonSet set2;

  assign(set1, polys1);
  assign(set2, polys2);

  return area(set1 ^ set2);
}

// loadPolysFromFile is defined elsewhere and was omitted in the question.
int main(int argc, char *argv[])
{
  BPolygonVec polys1;
  BPolygonVec polys2;

  loadPolysFromFile(polys1);
  loadPolysFromFile(polys2);

  cout << meassureError(polys1, polys2) << endl;
  return 0;
}

gdb output:

Program received signal SIGSEGV, Segmentation fault.
0x08156ce7 in std::list<boost::polygon::point_data<double>, std::allocator<boost::polygon::point_data<double> > >::begin (this=0x0) at /usr/include/c++/4.8.2/bits/stl_list.h:759
759           { return iterator(this->_M_impl._M_node._M_next); }

My data consists of about 2000 polygons with roughly 10 vertices each, and I would expect to have enough memory to process that. What am I doing wrong? Thanks for your help!

Answer

From the documentation: http://www.boost.org/doc/libs/1_55_0/libs/polygon/doc/index.htm

The coordinate data type is a template parameter of all data types and algorithms provided by the library, and is expected to be integral. Floating point coordinate data types are not supported by the algorithms implemented in the library, due to the fact that achieving floating point robustness implies a different set of algorithms and generally platform-specific assumptions about floating point representations.
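In other words, the `double` in `polygon_data<double>` and `polygon_set_data<double>` is the cause of the crash: the set operations assume integral coordinates. One workaround is to scale the floating-point input by a fixed factor and round to integers before building the Boost types. The sketch below shows only the scaling step in plain C++ (no Boost dependency); `IntPoint`, `toIntegerCoords`, and the scale factor are hypothetical names chosen for illustration, and in the real code the result would feed `boost::polygon::point_data<int>`.

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Hypothetical integer point type; in the actual program this role is
// played by boost::polygon::point_data<int>.
using IntPoint = std::pair<long, long>;

// Multiply each double coordinate by a fixed scale factor and round to
// the nearest integer, so that Boost.Polygon's integral algorithms can
// be applied. Precision is traded for robustness: a scale of 1000 keeps
// three decimal places.
std::vector<IntPoint>
toIntegerCoords(const std::vector<std::pair<double, double>> &pts,
                double scale)
{
  std::vector<IntPoint> out;
  out.reserve(pts.size());
  for (const auto &p : pts)
    out.push_back({std::lround(p.first * scale),
                   std::lround(p.second * scale)});
  return out;
}
```

With the coordinates converted, the typedefs become `polygon_data<int>` and `polygon_set_data<int>`, and `area(set1 ^ set2)` must be divided by `scale * scale` to get the error back in the original units.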
