I have a collection of samples (about 3000) with their resulting values, originating from a quite expensive simulation (the model takes a 32-dimensional input X and returns a 10-dimensional output Y; each function evaluation takes about 8 CPU-hours).
These samples originate partly from Monte Carlo sampling, Sobol sampling, and Latin hypercube sampling (so far so good), but about 60% come from the history of global optimization studies. (I understand that the latter contain quite a few samples that are not randomly distributed, but since I used metaheuristic methods, they also contain a portion of quite random-like samples. How can I distinguish the two kinds from one another?)
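To make the question concrete, the kind of filter I was vaguely imagining is a nearest-neighbour distance check: optimizer histories tend to pile up points very close together near optima, while the random and quasi-random samples are more spread out. Here is a rough sketch on made-up toy data (`cKDTree` is from scipy; the data, the cutoff, and everything else are just illustrative assumptions, not my real setup):

```python
# Sketch: flag points that sit unusually close to another point, which is
# typical of an optimizer converging on an optimum. Toy data only; how to
# choose the threshold is exactly the part I am unsure about.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
random_part = rng.random((200, 4))                           # space-filling-ish samples
optimizer_part = 0.5 + 0.01 * rng.standard_normal((200, 4))  # tight cluster near an "optimum"

X = np.vstack([random_part, optimizer_part])
tree = cKDTree(X)
d, _ = tree.query(X, k=2)   # d[:, 1] = distance to the nearest *other* point
nn = d[:, 1]

# Keep points whose nearest neighbour is not suspiciously close.
threshold = 0.5 * np.median(nn)   # ad-hoc cutoff, purely for illustration
keep = nn > threshold
print(keep.sum(), "of", len(X), "points kept")
```

In this toy case the dropped points are almost all from the clustered (optimizer-like) half, which is the behaviour I would hope for, but I have no idea whether such a filter is statistically defensible as preprocessing for a sensitivity analysis.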
More specifically: I would like to use a global sensitivity analysis method that requires independent, random or quasi-random samples, and I would like to feed it as many samples as possible from my total collection. Are there ways to assess the randomness of a given sample set? Or are there perhaps filters that can remove the samples that make the collection biased?
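For the "assess the randomness" part, the closest thing I have found so far is a discrepancy measure, which quantifies how far a point set deviates from a uniform distribution on the unit hypercube. A sketch using `scipy.stats.qmc.discrepancy` (a real scipy function; the toy data and the interpretation are my own assumptions, and my real inputs would need rescaling to [0, 1] per dimension first):

```python
# Sketch: comparing the centered L2-discrepancy of a uniform-looking sample
# against a clustered one. Lower discrepancy = closer to uniform coverage.
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(0)

# Toy stand-ins for my real data (which must be rescaled to [0, 1]^d first).
uniform_like = rng.random((256, 4))      # e.g. Monte Carlo samples
clustered = 0.2 * rng.random((256, 4))   # e.g. optimizer history crowding one region

disc_uniform = qmc.discrepancy(uniform_like)  # relatively low
disc_clustered = qmc.discrepancy(clustered)   # much higher: far from uniform

print(disc_uniform, disc_clustered)
```

What I don't know is whether a single summary number like this can tell me *which* samples to drop, or only that the collection as a whole is biased. Pointers to the right terminology here would already help a lot.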
I tried to Google this, but I don't think I know the right terminology for my problem. Thanks in advance for any help.