As to the rest of what you are alleging -- there is a wide spectrum of views on all of my questions. Many highly qualified theorists [e.g. Richard Lindzen] tend to take the GISS surface temperature record at face value -- their objections concern what is and is not included in the models. However, the surface temperature record as "packaged" by entities such as GISS is full of at least negligence, if not corruption, in the "adjustments" to the data. The raw data itself is full of stations that would not meet any reasonable criteria for good siting and maintenance practices. Still, theorists can be excused for their lack of concern over these kinds of things -- they don't really come into contact with the data, so its "nasty/dirty" features seem foreign to their mindset.
There are more subtle matters embedded in even the best of the "data packages," such as the loss of many of the Northern Hemisphere's coldest reporting stations after the end of the Soviet Union. It might have been "They pretend to pay us and we pretend to work" during the "Evil Empire" -- but at least there was some funding, as the Soviet Union was quite justifiably proud of its standing in the scientific world. After the Soviet Union ended, the former Republics -- by then mostly relatively poor independent countries -- quickly abandoned many remote stations with human data collection. Even Russia abandoned stations that had been operated since the days of the Tsars.
In addition there is the now-admitted "Climategate," involving outright scientific fraud by people like Michael Mann, who changed data to suit their politically inspired theoretical positions.
So if the traditional thermometry-based land surface temperature record is in question, and similar though different issues cloud the ocean surface temperature record -- what about the satellites? Satellite microwave radiometer data gathering has been fine-tuned to an amazing level by criticism and by the improvements made in response to that criticism -- BUT it is of quite limited duration: the continuous, cross-calibrated data sets only go back about 25 years. There is also the matter that each measurement integrates the temperature profile over a deep layer of the atmosphere, and so cannot provide a direct comparison with ground-based measurements.
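The profile-integration point can be sketched numerically. This is a minimal illustration, assuming a made-up mid-tropospheric weighting function and a simple constant-lapse-rate profile -- neither is a real MSU/AMSU channel weight -- but it shows why a channel that senses a weighted average of the whole column cannot stand in directly for a 2 m surface thermometer:

```python
import numpy as np

# Illustrative sketch only: hypothetical weighting function, not an
# actual satellite channel's. The radiometer reports a weighted
# average of the temperature profile, not the surface temperature.

z = np.linspace(0.0, 20.0, 201)           # altitude, km
T = 288.0 - 6.5 * np.clip(z, 0.0, 11.0)   # simple lapse-rate profile, K

# assumed weighting function peaking near 4 km in the mid-troposphere
w = np.exp(-((z - 4.0) / 3.0) ** 2)
w /= np.trapz(w, z)                        # normalize to unit area

T_channel = np.trapz(w * T, z)             # what the radiometer "sees"
T_surface = T[0]                           # what a ground thermometer reads

print(f"surface {T_surface:.1f} K, channel {T_channel:.1f} K")
```

In this sketch the channel value sits tens of kelvin below the surface reading, so comparing the two series requires modeling the weighting function, not a direct subtraction.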
There is one additional source of ground temperature data, and it comes, paradoxically, from balloon-borne radiosondes. Before a weather balloon is launched, a calibration reading is taken while it is still tethered, at a height just slightly above the standard height for ground-based thermometry. This practice began during the IGY, so the data set goes back to the late 1950s. However, as satellite and radar data have taken a larger role in weather forecasting, the weather balloon launches have become fewer and farther between -- the nearest launch site is Chatham on the Cape.
US Navy E-2Cs [the Navy's version of AWACS] do take temperature data on the climb to their operational altitudes -- but the profiles are inconsistent.
Finally, there are proxies [tree rings], which must be interpreted in light of other confounding factors, and anecdotal evidence such as the varying prices of leases on Scottish Highland fields and the prices of goods sold by Cistercian monks.
All of the above contributes to real questions about the historic climate.
And if you can't really be clear about the historic climate -- then all of the rest becomes pure speculation, not science.
The above is a small sample behind my suggestion of a 50 year plan:
- 10 years of prep
- to develop a good set of instrumentation -- designed for widespread deployment over all terrains, and to give automated, directly web-accessible global coverage
- simultaneously, a well-funded Challenge to develop reliable satellite-based ground temperature measurements
- simultaneous development of a commercial-air-carrier-based mid-tropospheric data collection system
- simultaneous selection of the best of the existing data sets -- freed of political adjustments
- 10 years to get all the equipment working and to collect a baseline [equivalent to the balloon still being tethered] for the models to tweak on
- 10 years for the modelers to run and predict the recently gathered data -- followed by a bracket run-off to pick the most reliable models based on their skill at predicting the past
- 20 years to see if anything is really happening which is outside normal variability
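The "predict the past" run-off could be scored with something as simple as error on held-out data. A toy sketch, with a synthetic observed record and hypothetical fixed-trend "models" standing in for real climate models (the names, trends, and noise level are all invented for illustration):

```python
import random

# Toy bracket run-off: score each candidate model against data it did
# not see, and keep whichever predicts the held-out record best.

random.seed(0)
years = list(range(30))
# synthetic "observed" anomalies: a 0.01/yr trend plus noise
observed = [0.01 * y + random.gauss(0.0, 0.1) for y in years]

# hypothetical candidate models, each predicting a fixed trend per year
models = {"flat": 0.0, "mild": 0.01, "steep": 0.03}

def rmse(trend):
    """Root-mean-square error of a fixed-trend prediction vs. observations."""
    errs = [(trend * y - o) ** 2 for y, o in zip(years, observed)]
    return (sum(errs) / len(errs)) ** 0.5

best = min(models, key=lambda name: rmse(models[name]))
print("winner:", best)
```

The real exercise would pit full models against decades of newly gathered data, but the selection principle is the same: rank on out-of-sample predictive skill, then advance the winners.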
As an aside, I suggest the following website as a good source for discussion of the issues from a number of different perspectives:
https://wattsupwiththat.com/