For those who don't know how science works:
it works with datasets. The data are never perfect: you might measure 20 degrees in one spot while a person a little further away measures 21 degrees. This gives a distribution of data... many measurements will cluster around 20 degrees, some a little higher, some a little lower. Suppose this set of measurements has 10% uncertainty.
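To make that concrete, here's a minimal Python sketch of how repeated noisy measurements form a distribution around a true value. The 20-degree value, the noise level, and the number of readings are all made up for illustration:

```python
import numpy as np

# Hypothetical example: 1000 temperature readings of the same spot, each
# offset by random measurement noise.
rng = np.random.default_rng(seed=0)
true_temp = 20.0                                         # unknown in practice
readings = true_temp + rng.normal(0.0, 2.0, size=1000)   # ~10% noise level

print(f"mean:   {readings.mean():.2f}")   # clusters around 20
print(f"spread: {readings.std():.2f}")    # width of the distribution, ~2
```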
Now we have a different dataset, for example the amount of CO2 in the atmosphere. Suppose this also has 10% uncertainty.
Now we have yet another dataset, for example the global average sea level. Suppose we have 30% uncertainty here.
The next step is to fit all of this to some model. Science works with theoretical models. These models include everything we know about the natural processes involved and are as accurate as possible. A model shouldn't try to be a perfect copy of reality, because we cannot recreate reality; it should be just complex enough to describe what we see around us.
Now... if we fit the model to one dataset, we get a model whose parameters carry, say, a 10% uncertainty.
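Here's what such a single-dataset fit looks like as a toy Python sketch. The linear model, the rate of 0.8, and the noise level are all invented for illustration; real climate models are vastly more complex:

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy one-parameter model: a linear trend over time.
def model(t, rate):
    return rate * t

rng = np.random.default_rng(seed=1)
t = np.arange(50.0)
data = model(t, 0.8) + rng.normal(0.0, 2.0, size=t.size)  # one noisy dataset

# curve_fit returns the best-fit parameters and their covariance matrix;
# the square root of the diagonal gives the parameter uncertainty.
popt, pcov = curve_fit(model, t, data)
rate, rate_err = popt[0], np.sqrt(pcov[0, 0])
print(f"fitted rate: {rate:.3f} +/- {rate_err:.3f}")
```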
Now we take another dataset and fit the same model to it as well. We find that the model is also a good match to this second dataset, so it now matches both. The resulting uncertainty isn't 10% anymore but lower, say 5%, because we've fitted to BOTH datasets; how much lower depends on how sensitive the model is to each dataset.
Now we take the third dataset and fit it with the same model. We find that the model is also a good fit to those data. Because the data are independent, they give us an additional constraint and reduce the overall uncertainty in the model even further, for example down to 3%.
If we repeat this for more and more datasets, we reduce the uncertainty each time. The more types of data we add, the lower the resulting uncertainty.
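The exact numbers above (10%, 5%, 3%) depend on how sensitive the model is to each dataset, but the basic effect can be shown with the textbook rule for combining independent estimates of the same quantity, inverse-variance weighting. A minimal sketch, reusing the made-up uncertainties from above:

```python
import numpy as np

# Inverse-variance weighting: the standard rule for combining independent
# estimates of the same quantity. Every extra dataset tightens the result.
def combined_uncertainty(sigmas):
    return 1.0 / np.sqrt(np.sum(1.0 / np.asarray(sigmas) ** 2))

# Illustrative relative uncertainties, one per dataset (the 10%/10%/30%
# figures from above).
sigmas = [0.10, 0.10, 0.30]
for n in range(1, len(sigmas) + 1):
    print(f"{n} dataset(s): {combined_uncertainty(sigmas[:n]):.1%}")
# 1 dataset(s): 10.0%
# 2 dataset(s): 7.1%
# 3 dataset(s): 6.9%
```

Note how even the comparatively noisy 30% dataset still tightens the result a little.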
In other words... the noise in the data is reduced. We can distinguish the noisy parts of one type of data better with the help of the other data.
That's the beauty of the IPCC reports and climate models. They gather information from many different fields, and while no single field knows everything for certain, the combination of all that knowledge gives a lot of confidence in the model.
That's how science works.
There's one caveat: models can be made so flexible that they'll fit any data, which is called overfitting... that's the tricky part to avoid. However, models are tested for sanity and robustness, and by now there are so many types of data that have to be fitted simultaneously that this threat is pretty much eliminated.
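The standard guard against overfitting is to hold some data back, fit on the rest, and check the model against the part it never saw. A minimal sketch, where the linear "truth", the noise level, and the polynomial degrees are all made up for illustration:

```python
import numpy as np

# Hold part of the data back, fit on the rest, and check the error on the
# part the fit never saw.
rng = np.random.default_rng(seed=2)
x = np.linspace(-1.0, 1.0, 30)
y = 2.0 * x + rng.normal(0.0, 0.5, size=x.size)   # noisy linear "truth"

idx = rng.permutation(x.size)
train, test = idx[:20], idx[20:]

# Compare a simple model (a line) with an overly flexible one (degree-9 poly).
for degree in (1, 9):
    coeffs = np.polyfit(x[train], y[train], degree)
    train_err = np.sqrt(np.mean((np.polyval(coeffs, x[train]) - y[train]) ** 2))
    test_err = np.sqrt(np.mean((np.polyval(coeffs, x[test]) - y[test]) ** 2))
    print(f"degree {degree}: train error {train_err:.2f}, test error {test_err:.2f}")
# The flexible model scores better on the data it was fitted to, but
# typically worse on the held-out data... that's overfitting showing itself.
```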
And this gives us the best state of knowledge we have at the moment. It's the culmination of all types of data and all our knowledge of how natural systems work.
To simply disregard such knowledge because of a bit of noise in the data is crude, to say the least, and doesn't do justice to a major effort.