Mining data, by its very nature, is rarely collected in a regular pattern; it is human nature, and very good business sense, to take more samples in the higher-grade parts of orebodies. As a consequence, data for resource evaluation is almost always clustered, as shown in the diagram below. While the most common method of grade estimation, ordinary kriging (OK), inherently declusters the input data through the point-to-point covariance matrix, other estimation methods, such as inverse distance modelling, do not decluster the data for the purposes of estimation, and this can sometimes lead to biased results. I once reviewed a low-grade nickel laterite estimate where the grade had been estimated with the inverse distance algorithm. The data was not particularly clustered, but the clustering effect was still significant: the grade dropped by 0.2% nickel when declustering was introduced as a precursor to the grade estimation.
Even when validating a conventional OK model we need to compare the model grades against the declustered drilling; to do otherwise would be to risk wrongly accepting a biased grade estimate. The diagram below shows the influence of declustering on a data set of palladium drill composites, plotting the cell volume from a cell declustering routine (on the x axis) against the declustered mean (on the y axis). The naïve mean – the average of the data without any declustering – is about 290 ppb, while the graph indicates that after declustering the mean of the sample data falls as low as 215 ppb. In other words, in a comparison with an estimated grade, the undeclustered mean, if used, may overstate the grade by up to 26%.
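The calibration exercise behind such a graph can be sketched in a few lines. The snippet below is a minimal, illustrative implementation of cell declustering in two dimensions: each sample is weighted by the reciprocal of the number of samples sharing its grid cell, and the declustered mean is computed for a range of cell sizes. The coordinates and grades are invented purely to demonstrate the effect; real calibration would of course use the project data set and three-dimensional cells.

```python
# Sketch of cell-declustering calibration (2-D, invented data for illustration).

def cell_declustered_mean(samples, cell_size):
    """Weight each sample by 1 / (number of samples in its grid cell),
    normalise the weights to sum to 1, and return the weighted mean grade."""
    counts = {}
    keys = []
    for x, y, _grade in samples:
        key = (int(x // cell_size), int(y // cell_size))
        keys.append(key)
        counts[key] = counts.get(key, 0) + 1
    weights = [1.0 / counts[k] for k in keys]
    total = sum(weights)
    return sum(w * g for w, (_, _, g) in zip(weights, samples)) / total

# A densely drilled high-grade zone (clustered) and a sparsely drilled
# low-grade surround -- the classic situation described in the text.
samples = [(float(x), float(y), 5.0) for x in range(5) for y in range(5)]
samples += [(10.0 * x, 10.0 * y, 1.0)
            for x in range(3) for y in range(3) if (x, y) != (0, 0)]

naive = sum(g for _, _, g in samples) / len(samples)
for size in (1.0, 5.0, 10.0):
    print(size, round(cell_declustered_mean(samples, size), 3))
```

With a tiny cell every sample sits alone and the declustered mean equals the naïve mean; as the cell grows to envelop the clustered zone, the declustered mean drops sharply, which is exactly the behaviour the palladium calibration curve displays.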
There are many methods of declustering a set of sample data, whether for validation or estimation purposes. These range from a simple subjective removal of clustered drillholes, which is time-consuming and difficult to execute correctly, to summing the kriging weights attached to each sample and using them as a declustering key. The most common method is perhaps the cell declustering approach (illustrated above, right), which places a three-dimensional grid over the data and weights each sample in inverse proportion to the number of data points in its cell. This method, although often successful, still relies on a subjective decision in the choice of the declustering cell volume, hence the need for calibration graphs as shown. A more objective approach, with (in theory) only one solution, is to generate three-dimensional polygons of influence (i.e. extending halfway to the next sample) around all of the data points, and then to use the polygon volume as a data weighting factor. This requires some nifty programming, or a routine in your generalised mining package dedicated to the purpose.
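If your package lacks a polygonal declustering routine, the polygon volumes can be approximated without any computational geometry at all. The sketch below (a hypothetical helper, not taken from any commercial package) uses a Monte Carlo trick in two dimensions: throw random points into the domain, assign each to its nearest sample, and take the hit count as an estimate of that sample's polygon-of-influence area, which then becomes its declustering weight.

```python
import random

def polygonal_weights(samples, xmin, xmax, ymin, ymax, n_trials=20000, seed=1):
    """Approximate polygon-of-influence (nearest-sample) areas by Monte
    Carlo: random points are assigned to their nearest sample, and the
    normalised hit counts serve as declustering weights."""
    rng = random.Random(seed)
    hits = [0] * len(samples)
    for _ in range(n_trials):
        px = rng.uniform(xmin, xmax)
        py = rng.uniform(ymin, ymax)
        nearest = min(range(len(samples)),
                      key=lambda i: (samples[i][0] - px) ** 2
                                    + (samples[i][1] - py) ** 2)
        hits[nearest] += 1
    return [h / n_trials for h in hits]

# Two clustered high-grade samples near the origin, one isolated low grade.
samples = [(0.0, 0.0, 3.0), (1.0, 0.0, 3.0), (10.0, 10.0, 1.0)]
w = polygonal_weights(samples, 0.0, 12.0, 0.0, 12.0)
declustered = sum(wi * g for wi, (_, _, g) in zip(w, samples))
naive = sum(g for _, _, g in samples) / len(samples)
print(round(naive, 3), round(declustered, 3))
```

The isolated sample earns the largest polygon and hence the largest weight, pulling the declustered mean below the naïve mean, just as the polygonal method intends. The same idea extends to three dimensions by sampling a volume rather than a rectangle.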
For the purposes of grade estimation using inverse distance (or even for OK where the data is strongly clustered), a data selection algorithm needs to be applied to the data selected for estimation, to ensure that one hole, or one set of face sampling data, does not overwhelm the grade estimate and thus induce a bias. Variants of the octant search are implemented in most mining software for this purpose; data is selected and ‘thinned out’ according to a (sometimes complex) set of rules governing the number of sectors to be filled with data and the minimum and maximum number of data per sector. The octant search is highly subjective, and the results can vary greatly with the choice of parameters. The best advice is to undertake the analysis with a range of likely parameters and monitor how robust the output is to the varying input conditions.
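The core of an octant search is simple to state. The sketch below is an illustrative implementation, not the routine of any particular package, and the parameter names and defaults are invented: samples are binned into the eight octants around the point being estimated, only the closest few in each octant are retained, and the estimate is rejected if too few octants contain data.

```python
import math

def octant_search(data, target, max_per_octant=2, min_octants=3):
    """Sketch of an octant search: bin candidate samples into the eight
    octants around the target, keep the closest `max_per_octant` in each,
    and return None if fewer than `min_octants` octants hold any data."""
    tx, ty, tz = target
    octants = {}
    for x, y, z, grade in data:
        dx, dy, dz = x - tx, y - ty, z - tz
        key = (dx >= 0, dy >= 0, dz >= 0)
        dist = math.sqrt(dx * dx + dy * dy + dz * dz)
        octants.setdefault(key, []).append((dist, (x, y, z, grade)))
    trimmed = {k: sorted(v)[:max_per_octant] for k, v in octants.items()}
    if len(trimmed) < min_octants:
        return None  # insufficient spatial coverage to estimate this block
    return [sample for v in trimmed.values() for _, sample in v]

# One steep drillhole contributes ten composites, all in a single octant;
# three scattered samples sit in other octants.
data = [(1.0, 1.0, float(z), 2.0) for z in range(10)]
data += [(-3.0, 2.0, 1.0, 1.0), (2.0, -4.0, 0.0, 1.5), (-2.0, -2.0, -1.0, 1.2)]
selected = octant_search(data, (0.0, 0.0, 0.0))
print(len(selected))  # the clustered hole has been thinned to max_per_octant
```

Note how the clustered hole contributes at most two composites regardless of how many it carries, which is precisely the de-biasing effect described above; the sensitivity analysis recommended in the text amounts to rerunning this with a range of `max_per_octant` and `min_octants` values.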
The takeaway message is that you ignore clustered data, whether as a validation data set or as an estimation input, at your peril!