
Time Blocks Science from Publication


AFTiger

Recommended Posts

2 hours ago, johnnyAU said:

I never said I didn't think we should continue to develop more efficient fuels, energy sources, recycling programs, cleaner air/water, etc...All good things.

Installing inefficient solar arrays and wind farms all over the landscape (which also require energy to manufacture, use less-than-desirable materials in the process, and require continual maintenance), which cannot sustain dense power grids and which drive skyrocketing energy prices to the point that many cannot afford to heat or cool their homes in the event of extreme weather, under the guise of CAGW may not be the best use of our time or resources. Scaring kids with alarmist BS, allowing them to skip school for extended periods of time, and using them as puppets/mascots for political purposes is probably not the best approach either. I'd add that ships of fools getting caught in the arctic ice every year or two, or gluing yourselves to the tarmac in desperation, might also not be the wisest of choices.

I invested in one of those "inefficient" solar farms via my electric coop. It has a 5-year payback, and everything from then on is pure savings.
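For context, a "simple payback" figure like that is just the buy-in cost divided by the yearly bill savings. A minimal sketch, with made-up placeholder numbers rather than my coop's actual figures:

```python
# Simple-payback sketch for a community solar buy-in.
# Dollar amounts are illustrative placeholders, not real coop numbers.

investment = 5000.0            # assumed upfront cost of the solar share
annual_bill_savings = 1000.0   # assumed yearly reduction in the electric bill

payback_years = investment / annual_bill_savings
print(f"Simple payback: {payback_years:.1f} years")   # -> 5.0 years

# Everything generated after the payback point is savings; this ignores
# discounting, panel degradation and maintenance fees for simplicity.
```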

Coal-fired electric plants operate at a thermal efficiency of about 37%. Solar panels currently operate at an efficiency of about 15%, but without introducing more carbon into the atmosphere. Technical advances (being made by engineers other than yourself) will improve this efficiency over time (see below).

As an engineer, you need to keep up with technological research. You are very short-sighted. :no:

 

 

Engineers Just Created The Most Efficient Solar Cells Ever

Australian engineers have taken us closer than ever before to the theoretical limits of sunlight-to-electricity conversion, by building photovoltaic cells that can harvest an unheard-of 34.5 percent of the Sun's energy without concentrators - setting a new world record.

The previous record of 24 percent was held by a large, 800-square centimetre solar cell produced by a US company, but these new photovoltaic cells aren't only more efficient, they also cover far less surface area, which means they're going to make solar power even cheaper.

"This encouraging result shows that there are still advances to come in photovoltaics research to make solar cells even more efficient," said one of the researchers, Mark Keevers, from the University of New South Wales (UNSW) in Sydney. "Extracting more energy from every beam of sunlight is critical to reducing the cost of electricity generated by solar cells as it lowers the investment needed, and delivering payback faster."

This UNSW team is the same one that set a new solar conversion record back in 2014, by using mirrors to concentrate sunlight and achieve 40 percent efficiency. But this new record is even more impressive, because it didn't involve any concentration, and it was something engineers hadn't expected to achieve for several decades.

"A recent study by Germany’s Agora Energiewende think tank set an aggressive target of 35 percent efficiency by 2050 for a module that uses un-concentrated sunlight, such as the standard ones on family homes," said one of the researchers, Martin Green. "So things are moving faster in solar cell efficiency than many experts expected." 

The new cell is only 28 square centimetres (about 4.3 square inches) and it works by splitting the incoming sunlight into four bands.

The infrared band of that light is reflected back towards a silicon solar cell, and the other three bands are directed into a three-layer, new type of solar cell, made of: indium-gallium-phosphide; indium-gallium-arsenide; and germanium.

The sunlight passes through each of these layers, or junctions, and energy is extracted by each at its most efficient wavelength. Any unused light passes on to the next layer, and so on, to squeeze the most out of every single beam.

You can see a diagram of what that looks like below:

[Image: solar prism diagram – UNSW Engineering]

To be clear, these four-junction solar cells aren't likely to end up on the rooftop of your home or office anytime soon - they're harder to maintain and more expensive than the standard single-junction solar cells we're used to seeing.

But this type of photovoltaic cell is ideal for solar towers, which use mirrors to concentrate sunlight onto a series of cells, and then convert that directly into electricity, often through heat - as is the case in this giant Moroccan solar plant. Or this record-breaking system in Sweden and Australia.

The team is now looking to scale up its solar cells and see what kind of results they can achieve when they're 800-square centimetres in size, like the previous record holders.

Right now, the theoretical limit for a four-junction device is thought to be 53 percent, which means even with their tiny cell, the UNSW team is two-thirds of the way there. 

"There’ll be some marginal loss from interconnection in the scale-up, but we are so far ahead that it’s entirely feasible," said Keevers. 

The potential of this new technology, once it's applied to large-scale solar plants, is going to be pretty exciting for the already-dropping cost of solar electricity, and we can't wait to see what happens next.

The 34.5 percent efficiency record has been confirmed by the US National Renewable Energy Laboratory, and the researchers are awaiting peer review.

 

 

But like I said, let's revisit this in a couple of decades. 







Perhaps you should learn what actual efficiency means, what intermittent sources mean, and what energy density means. Then you may move on to what it takes to mine the materials and what waste products are produced to build the windmills, solar cells and batteries. Moderate increases in the efficiencies of wind and solar won't cut it, even if you believe they will. Both Germany and Australia are headed towards killing their economies by attempting to move to 100% renewables. The sun doesn't always shine, and the wind doesn't always blow... and no, you cannot see CO2, even if young Greta says she can.


Figure 1: Graph on global energy[1]

The world today is inhabited by close to 8 billion people, and we meet almost 80% of our hunger for power with hydrocarbons (coal, gas, oil). Wind and solar made up an estimated 2% of 2017 primary energy; the remainder comes largely from nuclear, hydro and some biomass. Only 100 years ago we were 2 billion people. Of today's 8 billion people, at least 3 billion have no or only erratic access to power… and the global population will increase by another 3-4 billion within the next 50 years.

Now look at Figure 1 and extrapolate to the future. Do you believe that the non-hydro renewables, wind and solar, will give us the energy we need? Can they power the future sustainably and in an environmentally friendly way?

Solar and wind power are not new. However, over the decades we have improved their efficiency. The Betz Limit states that a turbine blade can capture at most about 60% of the kinetic energy in moving air – modern windmills have reached 45%. The Shockley-Queisser Limit states that at most about 33% of incoming photons can be converted into electrons in a silicon photovoltaic cell – modern PV reaches 26%. "The era of 10-fold gains is over"[2]. There is no Moore's Law in energy. It is time that we take a whole-system view when looking at solar and wind.
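To put those two limits side by side, here is a minimal Python sketch of the headroom left, using only the percentages quoted above:

```python
# Rough arithmetic behind the "era of 10-fold gains is over" point:
# how much room remains between today's devices and their physical limits.
# The figures are the ones quoted in this post, not new data.

BETZ_LIMIT = 0.593          # max fraction of wind kinetic energy a rotor can capture
MODERN_WIND = 0.45          # quoted efficiency of modern turbines
SHOCKLEY_QUEISSER = 0.33    # max for a single-junction silicon cell
MODERN_PV = 0.26            # quoted efficiency of modern silicon PV

def max_remaining_gain(current: float, limit: float) -> float:
    """Largest possible multiplicative improvement if the device reached its limit."""
    return limit / current

print(f"Wind: at best a {max_remaining_gain(MODERN_WIND, BETZ_LIMIT):.2f}x gain remains")
print(f"Silicon PV: at best a {max_remaining_gain(MODERN_PV, SHOCKLEY_QUEISSER):.2f}x gain remains")
# Both come out well under 1.5x - no 10-fold gains are left.
```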


Figure 2: Global prices for power – power in Germany is the most expensive[3]

Wind and solar are inherently intermittent means of power generation. They only work when the wind blows or the sun shines. We need to account for the cost of batteries, or the cost of conventional power kept as backup for wind and solar, when comparing the cost of power. None of the current Levelized Cost of Electricity (LCOE) measures account for this. Standard LCOE measures also fail to account for (1) the additional cost of the interconnections required, (2) the cost of managing networks with highly volatile energy inputs, and (3) the efficiency losses that result from keeping coal, gas, or nuclear power as backup. Number (3) is interesting and actually explains why the total cost of power goes up the more wind or solar you install beyond a certain point. What that certain point is depends on the country and region, but one thing is sure: Germany is beyond that point, as illustrated by its high power prices (Figure 2).
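For reference, a bare-bones LCOE calculation looks like the sketch below. The plant figures are illustrative placeholders; the point is what the standard formula leaves out:

```python
# Minimal sketch of a plain Levelized Cost of Electricity (LCOE) calculation,
# to make concrete what standard LCOE does and does not capture.
# All numbers are placeholders for illustration, not real plant data.

def lcoe(capex, annual_opex, annual_mwh, lifetime_years, discount_rate):
    """Discounted lifetime cost divided by discounted lifetime energy (USD/MWh)."""
    disc_costs = capex
    disc_energy = 0.0
    for year in range(1, lifetime_years + 1):
        df = 1.0 / (1.0 + discount_rate) ** year
        disc_costs += annual_opex * df
        disc_energy += annual_mwh * df
    return disc_costs / disc_energy

# Hypothetical 100 MW solar farm, 20% capacity factor, 25-year life:
solar = lcoe(capex=80e6, annual_opex=1.5e6,
             annual_mwh=100 * 8760 * 0.20, lifetime_years=25, discount_rate=0.06)
print(f"Illustrative solar LCOE: {solar:.0f} USD/MWh")

# Note what is missing - exactly the complaint above: nothing here prices
# backup capacity, extra interconnections, or the system cost of intermittency.
```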

Only recently has the IEA developed a new way of measuring the cost of electricity, which it calls the Value-Adjusted Levelized Cost of Electricity (VALCOE). In February 2019, the IEA wrote: "In India … using VALCOE… as the share of solar PV surpasses 10% in 2030, the value of [solar] daytime production drops and the value of flexibility increases." Figure 3 below illustrates the misleading cost comparisons that the current LCOE gives vs. the more correct VALCOE.


Figure 3: Levelized cost of electricity (LCOE) and value-adjusted LCOE (VALCOE)
for solar PV and coal-fired power plants in India[4]

Germany has become aware that it needs conventional power despite the huge wind and solar capacity it has installed (by the end of 2018, Germany's installed capacity was 59 GW of wind and 46 GW of solar, or 51% of total German capacity; wind and solar supplied 17% of Germany's electricity and only 4.6% of its primary power in 2018[5]). You might have heard that Germany decided to exit coal power in addition to exiting nuclear. Wind and solar will not suffice, so Germany decided to build new gas-fired power plants instead. We know that gas is typically more expensive than coal, more difficult and expensive to transport (requiring pipelines or LNG), and generally more difficult and sometimes dangerous to store. Why, then, is Germany shutting down its existing coal-fired power plants and building new gas-fired ones? Correct, the reason is greenhouse gas emissions. It is a very well-known fact that gas emits about half as much CO2 per kWh during combustion as coal.

What appears to be a less well-known fact is that gas emits/leaks methane (a greenhouse gas 28x more powerful than CO2 over a 100-year horizon and 84x more potent over a 20-year horizon[6]) during production and transportation. This has been documented in several studies, including Poyry 2016[6]. Figure 4 illustrates this fact and compares direct emissions (direct = during combustion) with indirect emissions (indirect = during production and transportation):

– Gas emits about half as much CO2 as coal during combustion

– Gas emits more CO2eq. (mostly in the form of methane) during production and transportation

– Total gas CO2eq. emissions are on par with coal, depending on the type of turbine and the location of the power plant
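A rough back-of-the-envelope check of those three bullets, using the GWP factors quoted above; the combustion and leakage figures are illustrative assumptions on my part, not numbers from the article:

```python
# CO2-equivalent comparison of gas vs. coal per kWh of electricity.
# GWP factors are the ones quoted above; emission and leakage rates are
# round illustrative assumptions, not figures from the article.

GWP100_CH4 = 28            # methane vs CO2 over a 100-year horizon
GWP20_CH4 = 84             # methane vs CO2 over a 20-year horizon

coal_direct = 0.90         # assumed kg CO2 per kWh from coal combustion
gas_direct = 0.45          # roughly half of coal, as stated above
gas_ch4_leak = 0.004       # assumed upstream methane leakage, kg CH4 per kWh

for label, gwp in (("100-year", GWP100_CH4), ("20-year", GWP20_CH4)):
    gas_total = gas_direct + gas_ch4_leak * gwp
    print(f"{label} horizon: coal ~ {coal_direct:.2f}, gas ~ {gas_total:.2f} kg CO2eq/kWh")

# With the 20-year GWP and modest leakage, gas closes much of the gap to coal,
# which is the point Figure 4 below is making.
```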


Note: CO2eq Emissions for LNG or shale gas are significantly higher than for pipeline natural gas (PNG)

Figure 4: coal vs. natural gas – green-house gas emissions during partial load operation[6]

Batteries have become far more efficient, and the recent move towards electric vehicles has driven large investments in battery "Gigafactories" around the world. The largest known and most discussed battery factory is Tesla's USD 5 billion Gigafactory in Nevada, which is expected to reach an annual battery production output of 50 GWh by 2020. Such factories will provide the batteries for the world's electric vehicles and are also supposed to provide backup batteries for houses (see Tesla's Powerwall[6]).

Figure 5 below summarizes the environmental challenge of today’s battery technology. The problem with any known battery technology has to do with two main issues:

1) Energy density

2) Material requirements

Energy density: Hydrocarbons are one of the most efficient ways to store energy. Today's most advanced battery technology can store only about 1/40 of the energy that coal can store, and that already discounts for a coal power plant efficiency of about 40%. The energy that a 540 kg, 85 kWh Tesla battery can store equals the energy in about 30 kg of coal. The Tesla battery must then still be charged with power (often through the grid), while coal is already "charged".

In addition, you can calculate that one annual Gigafactory production of 50 GWh of Tesla batteries would be enough to provide backup for about 6 minutes of the entire US power consumption. Today's battery technology unfortunately cannot be the solution to intermittency.
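A quick sanity check of the two claims above; the energy content of coal and the average US electric load are order-of-magnitude assumptions on my part, not figures from the article:

```python
# Check 1: the 85 kWh battery vs. ~30 kg of coal.
COAL_ENERGY_MJ_PER_KG = 24     # assumed thermal energy of hard coal
PLANT_EFFICIENCY = 0.40        # coal plant efficiency quoted above
KWH_PER_MJ = 1 / 3.6

coal_kg = 30
coal_electric_kwh = coal_kg * COAL_ENERGY_MJ_PER_KG * PLANT_EFFICIENCY * KWH_PER_MJ
print(f"30 kg of coal -> ~{coal_electric_kwh:.0f} kWh of electricity (vs. 85 kWh battery)")

# Check 2: one year of Gigafactory output as backup for the whole US grid.
GIGAFACTORY_GWH_PER_YEAR = 50
US_AVERAGE_LOAD_GW = 470       # assumed average US electric load
backup_minutes = GIGAFACTORY_GWH_PER_YEAR / US_AVERAGE_LOAD_GW * 60
print(f"50 GWh of batteries backs up the US grid for ~{backup_minutes:.0f} minutes")
```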

Material requirements: Next comes the question of the inputs and materials required to produce a battery. It is conservatively estimated that each 85 kWh Tesla battery requires 25-50 tons of raw materials to be mined, moved and processed. These materials include copper, nickel, graphite, cobalt and some lithium and rare earths. We will likely also need some aluminum and copper for the case and wiring. Additionally, 10-18 MWh of energy is required to build one Tesla battery, resulting in 15-20 t of CO2 emissions even assuming 50% renewable power.

I am not even considering the overburden that needs to be moved for each ton of minerals mined. The overburden ratio can be estimated at 1:10, so you can multiply the numbers above by 10. One Tesla battery requires 500-1,000 tons of material to be moved/mined, compared to coal, which requires only 0.3 tons – a factor of 1,700 to 3,300!
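The final ratio can be checked directly from the figures quoted above:

```python
# Check of the quoted factor: 500-1,000 t of material moved per battery
# versus 0.3 t of coal for the same stored energy.

material_moved_tons = (500, 1000)   # per battery, as quoted above
coal_tons_equivalent = 0.3          # as quoted above

for moved in material_moved_tons:
    factor = moved / coal_tons_equivalent
    print(f"{moved} t moved vs {coal_tons_equivalent} t of coal -> factor of ~{factor:,.0f}")
# Prints ~1,667 and ~3,333, i.e. roughly the 1,700-3,300 range quoted.
```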


Figure 5: case in point: Tesla‘s batteries – energy density & environmental impact[7]

This article cannot discuss the details of global warming. However, it is very worrying that young people are taught in school to fear the warming created by fossil-fuel burning. We had 1 degree of warming in the past 200 years. The “human cause” has much more to do with the heat that our existence (energy consumption) produces and releases to the biosphere rather than with CO2. The majority of warming is natural, caused by the sun as we are coming out of the Little Ice Age that ended about 300 years ago. We are not heading into a catastrophe, but we need to worry about real pollutants to our environment and the waste we create. This is where we should focus our attention and spend our resources.

Wind and solar – while certainly being appropriate for certain applications such as heating a pool (or a coop in your case), and thus earning a place in the energy mix – cannot and will not replace conventional power. We need a “New Energy Revolution”. To reach this New Energy Revolution we need to invest more in base research and at the same time invest in, not divest from, conventional power to make it efficient and environmentally friendly.


Why baby boomers’ grandchildren will hate them

https://www.washingtonpost.com/opinions/2019/09/17/why-baby-boomers-grandchildren-will-hate-them/

 

17 hours ago, johnnyAU said:

 We are not heading into a catastrophe, but we need to worry about real pollutants to our environment and the waste we create. This is where we should focus our attention and spend our resources.

 

If the average temperature of the globe rises by 3 degrees by 2100, as predicted, it will be a "catastrophe" in every practical sense.

Otherwise, I agree with your call for more research and planning to avoid it.  That's a BIG step from saying it's not a problem.

 


1 hour ago, homersapien said:

Why baby boomers’ grandchildren will hate them

 

 

If the average temperature of the globe rises by 3 degrees by 2100, as predicted, it will be a "catastrophe" in every practical sense.

Otherwise, I agree with your call for more research and planning to avoid it.  That's a BIG step from saying it's not a problem.

 

My children, and their children, will be taught to think for themselves and to adapt to a changing climate, whether it is extremely hot or extremely cold. Either or neither could happen. We actually have no way of accurately predicting what the climate will be in 20, 50, 100, etc. years. Claiming you can, and believing you can tax it away, is part of the problem.

I do find it hilarious that you continue to use WaPo op-eds as a 'credible' source.

My kids will be both prepared and thankful, as they'll have learned not to take politically-driven sensationalism at face value, and not conflate it with actual science. 


I started this topic to point out the active censorship of those who question the "consensus" of scientific thought on climate change. By blocking alternative observations, true science is denied its search for facts. This censorship is a dangerous trend, for it directs resources to unproven and sometimes stupid solutions. The alarmists are practicing a form of Lysenkoism. https://en.wikipedia.org/wiki/Lysenkoism

Quote

NOAA Scientist Turns Climate Skeptic, Exposes Censorship & Bias
Published on August 3, 2019

Written by Joseph Valle


The “science is settled” alarmist media don’t want people to know there are scientists, even award-winning ones, who dispute the idea of catastrophic global warming.

Because outlets ignore and censor such scientists, curious individuals must turn to other sources such as English journalist James Delingpole’s columns or podcast, the Delingpod.

On the July 25 podcast, he interviewed award-winning, former National Oceanic and Atmospheric Administration (NOAA) scientist Dr. Rex Fleming about his conversion from global warming alarmism to skepticism.

The scientist also discussed the manipulation of data within NOAA, accusing a few individuals of “fiddling” with ocean and atmospheric data under the Obama Administration.

He also brought up the prominent scientific organizations’ censorship of viewpoints by refusing to publish skeptical scientific papers.

Fleming admitted that for years he supported and “funded projects” by scientists attributing global warming to carbon dioxide in spite of “having doubts” while working for NOAA.

“Eventually I just read enough to realize it’s a totally wrong direction,” he said. “And so, in the past ten years, I’d say, I’ve been on the other side.” His shifting views made it far more difficult to be published though.

Although Fleming holds an undergraduate degree in math and a Ph.D. in atmospheric science, he could not get published by prominent U.S. scientific groups.

He is also the author of The Rise and Fall of the Carbon Dioxide Theory of Climate Change.

[Photo: Dr. Rex Fleming]

According to Fleming, the American Meteorological Society, the American Geophysical Union, and the American Association for the Advancement of Science refuse to publish scientific papers from scientists (like him) whom they consider “deniers.”

So he had to travel to Europe to have his 2018 paper on climate change peer-reviewed and published.

Fleming suggested the reason more scientists aren’t shifting away from anthropogenic global warming theory (AGW) is that they’re “in this groove of getting funds for huge, bigger computer systems to run these massive climate models. And they want their salaries to increase. They don’t want to change.”

“It’s been a wonderful gravy train” for scientists since the 1970s, Fleming added.

Delingpole reiterated Fleming’s argument that carbon dioxide levels historically have risen due to warm temperatures, “not the other way around.”

“Correct,” Fleming replied.

Fleming also criticized alarmists for targeting the fossil fuel industry for several reasons. He said one political reason was to push socialism.

“They’re using a calamity as a measure to get people’s attention,” he said. “So the climate is a good one to use. Because the media and scientists have wrongly, without any proof, assumed this is the problem.”

 


On 9/19/2019 at 8:43 AM, AFTiger said:

I started this topic to point out the active censorship of those who question the "consensus" of scientific thought on climate change. By blocking alternative observations, true science is denied its search for facts. This censorship is a dangerous trend, for it directs resources to unproven and sometimes stupid solutions. The alarmists are practicing a form of Lysenkoism. https://en.wikipedia.org/wiki/Lysenkoism

 

If there was no more supporting information and/or research submitted with his rejected publication than what you have revealed here, there is little wonder that his paper was rejected.  :rolleyes:

Thanks to peer review and high standards, the acceptance rate of scientific publications is far lower than most people might appreciate.

But I am sure he can get one of the Koch foundations to publish it.  I am pretty sure they don't submit their stuff to peer review. ;D


22 hours ago, AFTiger said:

Homer, when you publish a climate paper, let me know.

What's your point? It's not my fault that Fleming can't get his paper published by a reputable journal.

But like I said, someone will publish it, even if just himself.  If/when you find a link to it, please let me know so we can parse it for any actual science.


1 hour ago, homersapien said:

What's your point? It's not my fault that Fleming can't get his paper published by a reputable journal.

But like I said, someone will publish it, even if just himself.  If/when you find a link to it, please let me know so we can parse it for any actual science.

I don't think you know real science. 


1 hour ago, AFTiger said:

I will grant this; you are an expert in buffoonery.

Yet you are the one who doesn't realize this forum is for substantive arguments, not vacuous insults.

Get back to me after Fleming publishes his paper. I am looking forward to examining his arguments.


I have come to understand this: To some, global warming is a religion. I look upon the question of whether humans have much to do with climatic changes as a religious question, not a scientific one. At one point in the past, glaciers a mile thick covered North America. At another time, palm trees and other tropical organisms lived in Antarctica; their fossils are easily found there, encased in the ice.

If our scratchings and spewings speed up or slow down such things a little, it doesn't matter in the overall scheme of things.


Sea level rise not universally accepted

Quote

The uncompromising verdict of Dr Mörner is that all this talk about the sea rising is nothing but a colossal scare story, writes Christopher Booker.

If one thing more than any other is used to justify proposals that the world must spend tens of trillions of dollars on combating global warming, it is the belief that we face a disastrous rise in sea levels. The Antarctic and Greenland ice caps will melt, we are told, warming oceans will expand, and the result will be catastrophe.

Although the UN’s Intergovernmental Panel on Climate Change (IPCC) only predicts a sea level rise of 59 cm (about 23 inches) by 2100, Al Gore in his Oscar-winning film An Inconvenient Truth went much further, talking of 20 feet, and showing computer graphics of cities such as Shanghai and San Francisco half under water. We all know the graphic showing central London in similar plight. As for tiny island nations such as the Maldives and Tuvalu, as Prince Charles likes to tell us and the Archbishop of Canterbury was again parroting last week, they are due to vanish.

But if there is one scientist who knows more about sea levels than anyone else in the world it is the Swedish geologist and physicist Nils-Axel Mörner, formerly chairman of the INQUA International Commission on Sea Level Change. And the uncompromising verdict of Dr Mörner, who for 35 years has been using every known scientific method to study sea levels all over the globe, is that all this talk about the sea rising is nothing but a colossal scare story.

Despite fluctuations down as well as up, “the sea is not rising,” he says. “It hasn’t risen in 50 years.” If there is any rise this century it will “not be more than 10cm (four inches), with an uncertainty of plus or minus 10cm”. And quite apart from examining the hard evidence, he says, the elementary laws of physics (latent heat needed to melt ice) tell us that the apocalypse conjured up by Al Gore and Co could not possibly come about.

The reason why Dr Mörner, formerly a Stockholm professor, is so certain that these claims about sea level rise are 100 per cent wrong is that they are all based on computer model predictions, whereas his findings are based on “going into the field to observe what is actually happening in the real world”.

 https://tinyurl.com/njnt5br


Elsewhere 

Quote

A Clean Kill of the Carbon Dioxide-Driven Climate Change Hypothesis?

Guest geology by David Middleton

Way back in the Pleistocene (1976-1980), when I was a young geology student, the notion of CO2 as a driver of climate change was largely scoffed at…

Suggestion that changing carbon dioxide content of the atmosphere could be a major factor in climate change dates from 1861, when it was proposed by British physicist John Tyndall.

[…]

Unfortunately we cannot estimate accurately changes of past CO2 content of either atmosphere or oceans, nor is there any firm quantitative basis for estimating the magnitude of drop in carbon dioxide content necessary to trigger glaciation.  Moreover the entire concept of an atmospheric greenhouse effect is controversial, for the rate of ocean-atmosphere equalization is uncertain.

Dott, Robert H. & Roger L. Batten. Evolution of the Earth. McGraw-Hill, Inc. Second Edition 1976. p. 441.

Sometime after 1980, a new paradigm emerged, suggesting that Phanerozoic Eon climate change had largely been driven by CO2 (Royer et al., 2004). The model was that the weathering rates of silicate rocks governed the atmospheric concentration of CO2 (Berner & Kothavala, 2001) and that CO2 was the “control knob” for temperature. Well, this paradigm may have just taken a bullet to the head.

Rutgers Today > Research
Is Theory on Earth’s Climate in the Last 15 Million Years Wrong?
Rutgers-led study casts doubt on Himalayan rock weathering hypothesis
September 22, 2019

A key theory that attributes the climate evolution of the Earth to the breakdown of Himalayan rocks may not explain the cooling over the past 15 million years, according to a Rutgers-led study.

The study in the journal Nature Geoscience could shed more light on the causes of long-term climate change. It centers on the long-term cooling that occurred before the recent global warming tied to greenhouse gas emissions from humanity.

“The findings of our study, if substantiated, raise more questions than they answered,” said senior author Yair Rosenthal, a distinguished professor in the Department of Marine and Coastal Sciences in the School of Environmental and Biological Sciences at Rutgers University–New Brunswick. “If the cooling is not due to enhanced Himalayan rock weathering, then what processes have been overlooked?”

For decades, the leading hypothesis has been that the collision of the Indian and Asian continents and uplifting of the Himalayas brought fresh rocks to the Earth’s surface, making them more vulnerable to weathering that captured and stored carbon dioxide – a key greenhouse gas. But that hypothesis remains unconfirmed.

Lead author Weimin Si, a former Rutgers doctoral student now at Brown University, and Rosenthal challenged the hypothesis and examined deep-sea sediments rich in calcium carbonate.

Over millions of years, the weathering of rocks captured carbon dioxide and rivers carried it to the ocean as dissolved inorganic carbon, which is used by algae to build their calcium carbonate shells. When algae die, their skeletons fall on the seafloor and get buried, locking carbon from the atmosphere in deep-sea sediments.

If weathering increases, the accumulation of calcium carbonate in the deep sea should increase. But after studying dozens of deep-sea sediment cores through an international ocean drilling program, Si found that calcium carbonate in shells decreased significantly over 15 million years, which suggests that rock weathering may not be responsible for the long-term cooling.

Meanwhile, the scientists – surprisingly – also found that algae called coccolithophores adapted to the carbon dioxide decline over 15 million years by reducing their production of calcium carbonate. This reduction apparently was not taken into account in previous studies.

Many scientists believe that ocean acidification from high carbon dioxide levels will reduce the calcium carbonate in algae, especially in the near future. The data, however, suggest the opposite occurred over the 15 million years before the current global warming spell.

Rosenthal’s lab is now trying to answer these questions by studying the evolution of calcium and other elements in the ocean.

 

Rutgers Today

Basically, everything is bass-ackwards relative to the CO2-driven climate paradigm.

As far as press releases go, this one is very good. I would only take serious issue with this bit:

Many scientists believe that ocean acidification from high carbon dioxide levels will reduce the calcium carbonate in algae, especially in the near future. The data, however, suggest the opposite occurred over the 15 million years before the current global warming spell.

The “current global warming spell” is indistinguishable from other Holocene and Pleistocene global warming spells.

Figure 1. High Latitude SST (°C) from benthic foram δ18O (Zachos et al., 2001) and HadSST3 (Hadley Centre / UEA CRU via www.woodfortrees.org), plotted at the same scale and tied at 1950 AD. X-axis is in millions of years before present (MYA); older is toward the left.

We’ve already experienced nearly 1.0 ºC of warming since pre-industrial time.  Another 0.5 to 1.0 ºC between now and the end of the century doesn’t even put us into Eemian climate territory, much less the Miocene. 15 million years ago (MYA) was the middle of the Mid-Miocene Climatic Optimum (MMCO).

Their paper is pay-walled; here is the abstract:

Abstract
The globally averaged calcite compensation depth has deepened by several hundred metres in the past 15 Myr. This deepening has previously been interpreted to reflect increased alkalinity supply to the ocean driven by enhanced continental weathering due to the Himalayan orogeny during the late Neogene period. Here we examine mass accumulation rates of the main marine calcifying groups and show that global accumulation of pelagic carbonates has decreased from the late Miocene epoch to the late Pleistocene epoch even though CaCO3 preservation has improved, suggesting a decrease in weathering alkalinity input to the ocean, thus opposing expectations from the Himalayan uplift hypothesis. Instead, changes in relative contributions of coccoliths and planktonic foraminifera to the pelagic carbonates in relative shallow sites, where dissolution has not taken its toll, suggest that coccolith production in the euphotic zone decreased concomitantly with the reduction in weathering alkalinity inputs as registered by the decline in pelagic carbonate accumulation. Our work highlights a mechanism whereby, in addition to deep-sea dissolution, changes in marine calcification acted to modulate carbonate compensation in response to reduced weathering linked to the late Neogene cooling and decline in atmospheric partial pressure of carbon dioxide.

Si & Rosenthal, Nature Geoscience

The assumption has been that the rise of the Himalayan Mountains during the Miocene increased the rate of silicate rock weathering, drawing down atmospheric CO2 and precipitously cooling the Earth’s atmosphere. While the Neogene cooling did follow the uplift of the Tibetan Plateau, the cool-down from the MMCO trailed the uplift by about 7 million years (Myr).

Figure 2. High Latitude SST (°C) from benthic foram δ18O (Zachos et al., 2001); older is toward the bottom.

Part of the problem is that it is unclear if atmospheric CO2 levels were significantly elevated 15 MYA.

Figure 3. Neogene-Quaternary temperature and carbon dioxide; older is toward the left.

We can see that estimates for 15 MYA range from 250 to 500 ppm. While there is some support for higher CO2 levels 20-22 MYA, when the Tibetan Uplift was accelerated, it does not coincide with the MMCO at 15 MYA.

We now have clean kills of the ideas that the MMCO was driven by CO2 emissions from the Columbia River Basalt Group eruptions and that the subsequent cooling was driven by a drawdown of atmospheric CO2. How many clean kills does it take to kill a paradigm?

References

Berner, R.A. and Z. Kothavala, 2001. “GEOCARB III: A Revised Model of Atmospheric CO2 over Phanerozoic Time”. American Journal of Science, v. 301, pp. 182-204, February 2001.

Dott, Robert H. & Roger L. Batten, 1976. Evolution of the Earth. McGraw-Hill, Inc. Second Edition. p. 441.

Pagani, Mark, Michael Arthur & Katherine Freeman, 1999. “Miocene evolution of atmospheric carbon dioxide”. Paleoceanography, 14, 273-292. DOI: 10.1029/1999PA900006.

Royer, D.L., R.A. Berner, I.P. Montanez, N.J. Tabor and D.J. Beerling, 2004. “CO2 as a primary driver of Phanerozoic climate”. GSA Today, Vol. 14, No. 3, pp. 4-10.

Tripati, A.K., C.D. Roberts, and R.A. Eagle, 2009. “Coupling of CO2 and Ice Sheet Stability Over Major Climate Transitions of the Last 20 Million Years”. Science, Vol. 326, pp. 1394-1397, 4 December 2009. DOI: 10.1126/science.1178296.

Weimin Si & Yair Rosenthal, 2019. “Reduced continental weathering and marine calcification linked to late Neogene decline in atmospheric CO2”. Nature Geoscience. DOI: 10.1038/s41561-019-0450-

Zachos, J.C., Pagani, M., Sloan, L.C., Thomas, E. & Billups, K., 2001. “Trends, rhythms, and aberrations in global climate 65 Ma to present”. Science 292, 686-693.

https://tinyurl.com/y4rcazw2


The reality is simple! We’ve deforested and polluted so much that there’s no doubt man has contributed to climate change.....but to what degree is somewhat arguable. The basis of all this should be responsible management of our natural resources which includes a cleaner world.


Deniers are going to deny.   Arguments about what happened in past geologic eras - which involve millions of years - are irrelevant to what has been happening since the industrial age, which began less than 300 years ago.

Hell, as AFTiger proves above, deniers will even deny what empirical evidence demonstrates - such as the fact of sea level rise:

Sea level observations between 1993 and November 2018


We are well into the Anthropocene, and the evidence is clear to those willing to accept the facts.

Fortunately - or unfortunately, as the case may be - the proof is in the pudding.

Accordingly, we will see a severe drop-off in the number of deniers over the next couple of decades as the already conclusive evidence becomes undeniable, even to those determined to reject the reality. 

By 2050 they will be equated to those who believed the earth was flat and the sun revolved around it.

 

 


28 minutes ago, homersapien said:

By 2050 they will be equated to those who believed the earth was flat and the sun revolved around it

Or by 2050 the CAGW cult will be equated with Scientology. We'll see which turns out to be the truth.


Quote

Peer review is fraught with problems, and we need a fix

by Andy Tattersall, The Conversation


Dirty Harry once said, "Opinions are like a**holes; everybody has one". Now that the internet has made it easier than ever to share an unsolicited opinion, traditional methods of academic review are beginning to show their age.

 

We can now leave a public comment on just about anything – including the news, politics, YouTube videos, this article and even the meal we just ate. These comments can sometimes help consumers make more informed choices. In return, companies gain feedback on their products.

The idea was widely championed by Amazon, who have profited enormously from a mechanism which not only shows opinions on a particular product, but also lists items which other users ultimately bought. Comments and star-ratings should not always be taken at face value: Baywatch actor David Hasselhoff's CD "Looking for the Best" currently enjoys 1,027 five-star reviews, but it is hard to believe that the majority of these reviews are sincere. Take for instance this comment from user Sasha Kendricks: "If I could keep time in a bottle, I would use it only to listen to this glistening, steaming pile of wondrous music."

Anonymous online review can have a real and sometimes destructive effect on lives in the real world: a handful of bad Yelp reviews often spell doom for a restaurant or small business. Actively contesting negative or inaccurate reviews can lead to harmful publicity for a business, leaving no way out for business owners.

Academic peer review

Anonymous, independent review has been a core part of the academic research process for years. Prior to publication in any reputable journal, papers are anonymously assessed by the author's peers for originality, correct methodology, and suitability for the journal in question. Peer review is a gatekeeper system that aims to ensure that high-quality papers are published in an appropriate specialist journal. Unlike film and music reviews, academic peer review is supposed to be as objective as possible. While the clarity of writing and communication is an important factor, the novelty, consistency and correctness of the content are paramount, and a paper should not be rejected on the grounds that it is boring to read.

Once published, the quality of any particular piece of research is often measured by citations, that is, the number of times that a paper is formally mentioned in a later piece of published research. In theory, this aims to highlight how important, useful or interesting a previous piece of work is. More citations are usually better for the author, although that is not always the case.

 

Take, for instance, Andrew Wakefield's controversial paper on the association between the MMR jab and autism, published in leading medical journal The Lancet. This paper has received nearly two thousand citations – most authors would be thrilled to receive a hundred. However, the quality of Wakefield's research is not at all reflected by this large number. Many of these citations are a product of the storm of controversy surrounding the work, and are contained within papers which are critical of the methods used. Wakefield's research has now been robustly discredited, and the paper was retracted by the Lancet in 2010. Nevertheless, this extreme case highlights serious problems with judging a paper or an academic by number of citations.

More sophisticated metrics exist. The h-index, first proposed by physicist Jorge Hirsch, tries to account for both the quality and quantity of a scholar's output in a single number: a researcher who has published n papers, each of which has been cited n times, has an h-index of n. In order to achieve a high h-index, one cannot merely publish a large number of uninteresting papers, or a single extremely significant masterpiece.
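For anyone unfamiliar with it, the definition above translates into a few lines of code; a minimal sketch, not any official implementation:

```python
# h-index as defined above: the largest n such that the researcher has
# n papers with at least n citations each.

def h_index(citations):
    """Return the h-index for a list of per-paper citation counts."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# One blockbuster paper alone barely moves the h-index...
print(h_index([2000, 3, 2, 1, 0]))     # -> 2
# ...while a steady body of well-cited work does.
print(h_index([10, 9, 8, 7, 6, 5]))    # -> 5
```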

The h-index is by no means perfect. For example, it does not capture the work of brilliant fledgling academics with a small number of papers. Recent research has examined a variety of alternative measures of scholarly output, "altmetrics", which use a much wider set of data including article views, downloads, and social media engagement.

Some critics argue that metrics based on tweets and likes might emphasise populist, attention-seeking articles over drier, more rigorous work. Despite this controversy, altmetrics offer real advantages for academics. They are typically much more fine-grained, providing a rich profile of the demographic who cite a particular piece of work. This system of open online feedback for academic papers is still in its infancy.

Nature journals recently started to provide authors with feedback on page-views and social media engagement, and sites such as Scirate allow Reddit-style voting on pre-print articles. However, traditional peer-reviewed journals and associated metrics such as impact factor, which broadly characterises the prestige associated with a particular journal, retain the hard-earned trust of funding organisations, and their power is likely to persist for some time.

Post-publication review

Post-publication review is a model with some potential. The idea is to get academics to review a paper after it has been published. This would remove the bottleneck that journals currently create, because editors are involved and peer review has to be done prior to publication.

But there are limitations. Academics are never short of opinions in their areas of expertise – it goes with the territory. Yet passing comment publicly on other people's research can be risky, and negative feedback could provoke a retaliation.

Post-publication review also has the potential for bias via preconceived judgements. One researcher may leave harsh comments on another's research simply because they do not like that person: rivalry in academia is not uncommon. Trolling on the web has become a serious problem in recent times, and it is not just the domain of the uneducated, bitter and twisted, but is also enjoyed by members of society who are supposedly balanced, measured and intelligent.

One post-publication review platform, PubPeer, allows anonymous commenting, which – as seen with other sites that allow anonymous posts – could open the door to more trolling and abusive behaviour, while offering reviewers an extra level of protection for what they say. One researcher recently filed a lawsuit over anonymous comments on PubPeer which they claim caused them to lose their job, after accusations of misconduct in their research. In a similar case, an academic claimed to have lost project funding after a reviewer complained about a blog post they had written about their project.

Post-publication comment can also be susceptible to manipulation and bias if not properly moderated. Even then, it is not easy to detect how honest and sincere someone is being over the web. Recent stories featuring TripAdvisor and the independent health feedback website Patient Opinion show how rating and review systems can come into question. Nevertheless, research can probably learn something from the likes of Amazon about how a long tail of research discoverability can be created. Comments and reviews may not always truly highlight how good a piece of research is, but they can help create a post-publication dialogue – a global connectivity around a topic of research – that in time sparks new ideas and publications.

Many now believe that the long-standing metrics of academic research – peer review, citation-counting, impact factor – are reaching breaking point. But we are not yet in a position to place complete trust in the alternatives – altmetrics, open science, and post-publication review. What is clear, though, is that in order to measure the value of new measures of value, we need to try them out at scale.

 


On 9/28/2019 at 10:36 AM, homersapien said:

I get it.  You don't believe in science. 

If that fraud you support is science, then you don't either.


5 minutes ago, AFTiger said:

If that fraud you support is science, then you don't either.

So, in your mind, science is a fraud. 

Isn't that what I just said?  You don't believe in science.


Archived

This topic is now archived and is closed to further replies.



