Monday, 26 November 2012
Renewable Energy getting to escape velocity?
So one of the main objections the naysayers raise is that renewable energy is "too intermittent" and that there is no way of overcoming this, since the sun does not always shine and the wind does not always blow. In the case of wind, the naysayers often claim it is useless because you would need an equivalent capacity of standby fossil-fueled plant in case of a 3-sigma event such as took place in Texas a few years ago, where the wind failed to blow for 4 days straight.
Well maybe, perhaps.
BUT, and it's a very large but, the problem isn't one of intermittency per se, because intermittency can be solved by various means. Again, like "peak oil", it's one of expense: is it cheaper to run on fossil-fueled plant or is it cheaper to run on renewables?
The "no substitute" doomers point of view doesn't even come in to the picture. Quite clearly renewables are a substitute. The intermittency issue, however, is in fact one of expense rather than non-existent technology because you could, for example, distribute wind farms at large distances from each other and cancel out on average the non-wind-blowing days because it's far less likely (a 6-sigma event?) that the wind will fail to blow everywhere. Likewise, you could also "store" electricity in e.g. large groups of refrigerators (such as are found at port facilities) or else use pumped storage such as is used in hydro electric facilities.
Batteries don't even come into it. Not, however, because they don't work. Clearly they work. It's cost. Nobody has seriously used batteries for wind farms (or solar farms for that matter) because the cost per kWh, once you factor in up-front costs, is prohibitive compared to fossil-fueled electrical generation.
That, however, may be about to change. See this: http://www.mhi.co.jp/en/news/story/1211221593.html
The executive summary of the above link is that Mitsubishi Heavy Industries is investing in a pilot project using large-format li-ion batteries as backup for a wind farm on some islands in the north of Scotland.
Now what's interesting about this is that the wind farm already has a connector to the mainland, where electricity is much cheaper. They are doing this purely to store the overflow electricity. That tends to suggest they are closing in on it being cost-competitive to build wind farms with built-in battery storage instead of a connector link to the main grid.
That, if true, opens up all sorts of interesting possibilities, not least a further debunking of dieoff.
Wednesday, 29 February 2012
Death by Desertification
Those who like to predict doom from "global warming" typically say that any increase in temperature will inevitably lead to crop failures, more frequent and more powerful storms, sea level rises, drought and desertification.
In fact, in one of the greatest hothouse epochs of all geological time, the Eocene (caused by a superspike in greenhouse gases including carbon dioxide, and possibly a large pulse or many pulses of methane), when temperatures reached as much as 20C in the Arctic, low and mid latitude regions were SIGNIFICANTLY wetter than today.
Here's the proof:
http://www.sciencemag.org/content/332/6028/455.abstract
"This increased offset could result from suppression of surface-water δ18O values by a tropical, annual moisture balance substantially wetter than that of today. Results from an atmospheric general circulation model support this interpretation and suggest that Eocene low latitudes were extremely wet."
And warm temperatures + wetter weather = greater productivity of plants.
Greater productivity of plants = higher crop yields.
Higher Crop Yields = larger sustainable human population.
So much for the doom from warming theory.
It's *cooling* we need to worry about.
Oops. Please try harder dear climate modeler "scientists". FAIL.
Thursday, 16 February 2012
Further Collapse of the economy due to the increased automation of jobs by software?
Some commentators reckon that many of us will be put out of work by software in the near future, and that thus all of the wealth will inevitably end up in the hands of a very few (say the famous "1%"). That sounds plausible but is very simplistic, *Marxist*, and doesn't take into account the realities of the situation:
Even today most work is already done by computers.
I doubt that all of the jobs will be eliminated for a simple reason: the banks can't allow it to happen or they will collapse.
More likely we will continue with the boom-and-bust scenario we have now, with the average knowledge worker fitting the "I will pretend to work and you will pretend to pay me" pattern, whereby they are effectively thinking most of the time and then pushing buttons for a small amount of time to produce products/deliverables.
Additionally unless we get strong AI, where is the basic research going to come from to produce new generations of products?
The "rich" can't simply gobble up all manufacturing capacity and that's all there is to the economy.
Even today the vast majority of the economy is in services. Some of that can be further automated but the thinking cannot yet be.
Many companies' biggest resource (a tired old cliché, but still true) is the intellectual property generated by their people.
Are the rich so much smarter than the rest of us that they can generate *all* of the intellectual property all by themselves unassisted?
In the other extreme nightmare scenario, where the rich don't give a shit about services and new intellectual property and are interested in shrinking the economy down to automated manufacturing and high-end services, with no new intellectual property due to lack of scale (i.e. stagnation), and the rest of us in blinding poverty: I reckon that would be a recipe for revolution, never mind the fact that the aggregate economy would shrink and there would be fewer rich people.
It's not in the interests of the rich to put everyone out of work. We're a "resource" and the more productive we are the richer they are.
I suspect that instead we may just see more booms and busts, as the only way to drive the fake economy and create jobs will be to print money and force it through handfuls of pre-picked "winners" like the zombie companies of Japan.
On the other hand, one other super optimistic scenario might be that, in the absence of strong AI, we simply incrementally upgrade the tools that each worker uses. The doom argument assumes that only university-educated people can use automated software or hardware tools.
That's not even true today. Call centers use highly automated systems. Yes, we may get to the stage where many of the "scripted" call centers could be automated, and that would throw people out of work, right? Well, it comes down to trickle-down economics versus a zero-sum game. If the game is zero-sum (i.e. no net new profit from further automation) then yes, when their jobs are automated out of existence there will be no new jobs. In all likelihood, however, the profit margins of companies that use automated software tools instead of people will increase, because otherwise why do it? It doesn't make sense for a company to invest in automated software tools to do a job if humans are cheaper. So we can definitively say that profit margins will increase, which means the owners have more money to spend. Now here's a question: do higher-income people and business owners spend most of their money on (a) manufactured products or (b) services? And a further question: do higher-income people spend a higher or lower share of their income on *personal* services? The answer, of course, is higher. Therefore those displaced from low-end service jobs will find themselves doing more personal services, which are not automated.
On the other hand, what about the higher-end jobs that currently require a university degree or significant training? Surely some of those jobs will be eliminated by more intelligent automated tools? Think again. If the tools are semi-intelligent themselves, then with adequate training even dummies will be able to operate them in all but the most limited set of circumstances. Instead the dummies will find themselves enfranchised, in much the same way that high-paying manufacturing jobs in the city raised the incomes of poor farm workers who left the land to find work in factories; in places like Detroit they ended up middle class instead of working class or poor.
Now you may be sceptical, especially if you think that the economy is zero-sum. In fact, the economy has always been about the growth of some sectors and the collapse of others. That's because of the continual development of new products as scientific research advances. There is currently massive change in China but apparent stagnation in the Western world. Western commentators seem to think there is a global problem of stagnation. There is not. There is a temporary imbalance whereby China and other "developing" countries are cheaper because of labor and/or better, more modern supply chains. That does not mean that the *global* economy is shrinking. On the contrary.
The real challenge facing us in the West is to develop new industries. New industries are based on new products and are created in a process called "creative destruction", whereby the old non-competitive industries collapse and are replaced by the new. Horse-drawn carriages were replaced by automobiles. How many people can name the most successful manufacturer of horse-drawn carriages in the 19th century? Not many. It's gone. But GM, Ford, Nissan et cetera are here, and are shortly to be in fierce competition with Chinese competitors who may or may not be more effective than they are. Likewise, the likes of Bell and AT&T have had to contend with the rise of cellphones, and cable companies have had to contend with the rise of the internet. Telegraph companies had to contend with the telephone before them, et cetera.
Right now the two main risks to the economy growing have nothing to do with the automation of jobs by information-enhanced software tools. Those two risks are the debt overhang from the housing bubble in the Western (and especially English-speaking) countries, and the need to shift growing transportation demand away from oil as conventional oil supplies peak and start to decline.
In both cases, the problems need to be solved by *more* innovative products, which in turn will generate the new industries we desperately need in order to replace the industries in which we cannot compete against rising stars like China.
Where will these new products come from?
Basic research and design.
In fact, in an abstract sense, the core of the economy itself *is* R & D, and *everything* else is support for R & D. Let's take a look at how that part of the economy will be affected by increasing automation:
Those of us who are university educated are in no danger of being automated out of existence; we'll simply be using ever more powerful tools and adding much more value than we currently do. Take a university researcher, for example. One researcher is useful, but much of the work is currently spent searching the literature and collating and correlating the existing research. The internet has now enabled laypersons to do much the same thing, using Google and simply working through the mathematical combinations of all the relevant keywords (see the sketch below). A layperson could not understand all of the texts thrown up by the combined keyword searches, but could certainly cut down the amount of time the researcher spends on this. If that job were automated further, so that some semi-intelligent tool could correlate successfully and pull up everything specific to what the researcher is looking for, then they could spend more of their time on experimentation. The experimentation itself can also be sped up significantly by automation. So does that put the researcher out of business? On the contrary. It increases productivity *massively*, and the pace of scientific progress will jump by orders of magnitude. Now could we somehow find some basic ways to plug laypersons into this process? Indeed we can, and we have. There is already a game for folding proteins (Foldit).
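To make the "mathematical combination of keywords" idea concrete, here's a minimal Python sketch. The keyword list and the AND-query format are made up for illustration; this is not any real research tool:

```python
# Hypothetical sketch: enumerate every pair and triple of relevant
# keywords and turn each into a literature-search query string that a
# researcher (or a layperson helping one) could run.
from itertools import combinations

keywords = ["lithium", "anode", "cathode", "electrolyte", "degradation"]

queries = []
for r in (2, 3):  # pairs and triples of keywords
    for combo in combinations(keywords, r):
        queries.append(" AND ".join(combo))

print(f"{len(queries)} candidate queries, for example:")
print("\n".join(queries[:5]))
```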
Sunday, 20 November 2011
More battery breakthroughs
So things are getting interesting in the battery arena.
Right now we have batteries for electric cars that cost around $20,000 for a 150-mile range. While that would avert a collapse of the transportation and logistics network if it were all we had (it's not), the price is currently so high that it's not *very* competitive with lower-end internal combustion engines.
We really need, at a minimum, double the range at half the price to bring costs and utility to a level at which the average buyer of today's vehicles will purchase them en masse. Even better, obviously, would be a battery with three times the range at half the cost or less of today's batteries.
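To put numbers on those targets, here's a quick back-of-envelope sketch; the $20,000 and 150-mile figures are the ones quoted above, and the rest is plain arithmetic:

```python
# Back-of-envelope illustration of the range/price targets above.
current_cost, current_range = 20_000, 150  # dollars, miles (figures from the post)
print(f"today:  ${current_cost / current_range:.0f} per mile of range")

target_cost, target_range = current_cost / 2, current_range * 2
print(f"target: ${target_cost / target_range:.0f} per mile of range "
      f"(double the range at half the price = a 4x improvement)")
```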
Well, as it happens, the batteries we are using today are 2003's technology. While it may be frustrating that it's taking something like 9 years to produce batteries with adequate range and pricing, compared with information technology, which doubles in performance every 18 months, we are nearly there. As I make it, there are now four viable improved battery technologies in the lab at a pre-production stage. There are, in fact, more than four promised technologies, but if we put faith only in batteries promised by large organizations with the funding and the process, engineering and production capacity to actually bring the technology to market in any kind of meaningful way, then there are four.
They are:
IBM's Battery 500, which is the result of millions of hours of advanced supercomputer simulation of different anode and cathode chemistries for the holy grail of lithium batteries: lithium-air. If this battery is real then it will have a 500-mile range for the same cost as today's batteries. Definitely adequate. On an off-topic note, I wish there were a paper somewhere explaining how their model worked, because the way they rapidly scanned over 20 million chemicals makes me suspect AI was somehow involved, and that would be even bigger news than just a new battery. IBM promises to have a prototype ready by 2013 and, if successful, hopes a battery manufacturer will license the technology and be in production by 2020.
Toshiba's SCiB lithium titanate battery with double the range and the same cost, coming to market in 2013.
Nissan has developed a new, better anode for the battery it uses in the Nissan Leaf, which currently has a 100-mile range. It plans to release these new batteries, with double the range, in 2015.
Altairnano, LG Chem and A123 Systems all have a variety of more efficient cathodes for more advanced lithium-ion batteries at lower cost.
It's also worth pointing out that there are magnesium-chemistry batteries in the lab, as well as lithium iron phosphate batteries and various other technologies being worked on that are further away from the market.
Now, a corollary of the battery advances and cost reductions is a reduction in the intermittency problem of renewable sources of electricity such as wind and solar. The problem of intermittency isn't really a problem of technology, since we already have technical solutions: geographic dispersal of wind farms, hydro storage, compressed air, vanadium flow batteries, etc. It's really a problem of price (same as with electric cars). If battery prices become low enough that they can be added to existing renewable electric infrastructure with no large-scale increase in price to the consumer, it will be a no-brainer to do so.
Wednesday, 9 November 2011
Death by Carbon Dioxide?
The global warming models propounded by the climate change scaremongers suggest warming of a "dangerous" level of 4-8C. Quite why an *average* temperature increase of that level across the whole planet over a whole year, including both daytime and night-time temperatures, is so scary escapes me. I'd like to see more granular data explaining why such-and-such an increase over a smaller region would be catastrophic over a smaller timescale, for example.
But I'm not going to look at that today. Instead I'm going to look at the *science*.
If we examine the actual math, the equation for absorption/emissivity by carbon dioxide produces three salient facts.
1. It's about a ONE degree increase in temperature per DOUBLING of carbon dioxide
2. All things being equal, absorption and emissivity are roughly in balance. The more radiation coming in, the higher the emissivity. The end result should be a wash.
3. Increases in temperature are *instant* if you double carbon dioxide. There is *no* lag.
So what gives?
Well the climate "scientists" are quoting temperature increases of much higher than one degree and absorbtion/emissivity model says it should be a wash so that means that the higher temperature increases are due to something else instead of carbon dioxide since carbon dioxide only leads to an increase of one measly degree per doubling.
At one degree per doubling, starting from the pre-industrial level of roughly 280 parts per million, we should see a one degree increase at 560 parts per million, a two degree increase at 1120 parts per million, three degrees at 2240, four degrees at 4480, five degrees at 8960 and six degrees at 17,920 parts per million.
We have to go to ridiculous volumes of carbon dioxide to get to the high numbers proposed by the climate "scientists".
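To make that arithmetic concrete, here's a minimal Python sketch of the purely logarithmic model the argument above assumes: one degree per doubling from a roughly 280 ppm baseline. These are the post's assumptions, not a full climate model:

```python
import math

DEG_PER_DOUBLING = 1.0  # the post's assumed sensitivity, degrees C per doubling
BASELINE_PPM = 280.0    # approximate pre-industrial CO2 concentration

def warming(ppm, sensitivity=DEG_PER_DOUBLING, baseline=BASELINE_PPM):
    """Temperature change under a purely logarithmic (per-doubling) model."""
    return sensitivity * math.log2(ppm / baseline)

for ppm in (280, 400, 560, 1120, 2240, 4480):
    print(f"{ppm:>5.0f} ppm -> {warming(ppm):+.2f} C")
```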
So what can possibly be causing it since we *have* seen an increase in temperature? (Although it has to be said that the observed increase in temperature is not as high as the scary climate models propounded by the scaremongers).
Well, in order to get to "scary" levels of temperature increase there has to be a lag effect, since the observed temperature increase hasn't corresponded to the scary projections. We also need significant positive feedback effects, such as the melting of the ice sheets and the reduction of forest cover. Now, we can definitively say that both ice sheets and forest cover have decreased, and that both of these are positive feedbacks, increasing the temperature rise above and beyond the one degree per doubling from carbon dioxide on its own.
Additional negative feedbacks are cloud cover and smoke/aerosols, with increased cloud cover tending to decrease temperature and smoke/aerosols likewise tending to decrease temperature.
Putative positive feedbacks increasing warming include methane gas increases.
Now the observable facts are these:
Ice cover has decreased. Forest cover has decreased. Cloud cover has decreased. Fossil fuel burning has increased. Carbon dioxide emissions have increased. Smoke and aerosol emissions have increased.
What can we speculate from this?
Decreasing ice cover should lead to increased temperature increases over and above carbon dioxide emissions.
Decreasing forest cover should lead to increased temperature increases over and above carbon-dioxide emissions.
Fossil fuel burning will increase both carbon dioxide and smoke and aerosols.
Carbon dioxide increases should lead to a one degree increase in temperature per doubling (which, as shown above, is piffling compared to the concentrations actually observed).
Smoke and aerosol increases should have led to a lowering of temperature below what has been observed.
So, to explain putative future temperature increases over and above the one degree per doubling, as well as the already-observed increase beyond what carbon dioxide alone accounts for, we have to invoke very large feedback effects: i.e. melt the ice cover by a large amount, increase the amount of deforestation, and invoke possibilities such as the release of methane from methane clathrates on the ocean floor.
But here's the rub: once *all* the ice has melted and the entire planet has been converted to agriculture by removing *all* of the forests, there are *no more* possible positive feedbacks from those two drivers. Likewise, methane persists in the atmosphere for only around a decade, and though it's a *much* more powerful greenhouse gas than carbon dioxide, once the methane degrades into carbon dioxide the warming is again limited to one degree per doubling. Not really a substantial amount.
So we're left with clouds.
So to pin the blame on carbon dioxide we have to invoke a huge positive feedback by showing that increasing carbon dioxide increases cloud cover which increases temperature.
Unfortunately the data goes in the opposite direction. Increasing cloud cover results in a cooler world, not a warmer one.
And *yet* temperatures have increased, and carbon dioxide has also increased (although temperatures have not risen by as much as the scary models, with their ridiculous positive feedbacks, predict). So what gives?
Cloud cover has actually decreased.
But that doesn't make sense if it's carbon dioxide that's driving it.
In fact, it's *not* carbon dioxide that's driving it though it *is* man-made emissions that are driving it.
It's *smoke*.
Now here's an interesting fact: around 66 million years ago there were massive eruptions, the Deccan Traps, which released a shitload of carbon dioxide into the atmosphere, probably because the Traps were sitting on top of huge coal deposits. But here's the rub: although temperatures increased by a whopping amount (12C or thereabouts, then quickly levelled out to about a 6C increase), carbon dioxide alone could not possibly have done that. Even if you invoke a pulse effect melting the methane clathrates, you'd have had at *most* a temporary spike, so there should only have been the 6C increase. But the data show otherwise. Looks like there might have been something else. I suspect it's smoke.
Getting back to present times:
If we remove smoke from the picture and increase carbon dioxide we should see an increase in temperature MEDIATED BY an increase in cloud cover.
But we don't see that. Instead we see decreased cloud cover and a temperature increase *exactly* predicted by the increase of carbon dioxide. So the additional temperature increase predicted by the climate change scaremongers rests entirely on alleged positive feedbacks.
Now, we've already shown that there's a limit to the duration of the positive feedbacks, so they can't *possibly* generate a "runaway greenhouse effect". We've also shown that carbon dioxide emissions by themselves should only create a one degree increase per doubling, AND that should be mediated by increased cloud cover. Once the positive feedback mechanisms of melting the icecaps and deforestation have done their job, we should only see one degree per doubling from then on, and the volume of carbon dioxide emissions required to get to multiple doublings is absolutely staggering.
In other words, in order to get a horror scenario, the only possible blame we can pin on emissions is smoke and aerosols. Smoke and aerosols are what lead to reduced cloud cover. If we continue to increase our burning of fossil fuels we will continue to increase our smoke/aerosol emissions, and *that* is what will keep amplifying temperature increases beyond one degree per doubling.
So what we ought to do is not limit carbon emissions per se: if we want to hold temperature increases to one degree per doubling, we have to reduce smoke/aerosol emissions.
But we're not seeing that position from the greenies. Instead we're seeing an attack on multiple levels against all forms of large-scale industrial activity. That position isn't justified by the effects of increased carbon dioxide emissions by themselves, and if we remove smoke/aerosols from the equation then we would need to increase our carbon dioxide emissions by an unfeasibly massive amount in order to get to so-called "scary" temperature increases.
So what gives?
The actual position propounded by the greenies is not based on a desire to limit carbon dioxide emissions per se. Instead it's based on limiting man-made interference with the global ecosystem, and from that angle *everything* is under attack: the burning of fuels, agriculture, extractive mining, and the transport of products and people, all in order to allow the ecosystem to return to a natural state with no interference by humankind.
So basically if you're in favor of human dieoff, let's put the greenies, the druids and the ecologists in charge.
Monday, 31 October 2011
Somewhat off-topic, almost science-fiction-like dieoff scenarios
So I've revisited the Carter catastrophe using the mediocrity principle, and also looked at a possible extinction event with relevance to us: competition from an equally intelligent competitor species.
Life has existed for approximately 3.5 billion years and the sun has existed for approximately 4.5 billion years.
Since we have no evidence of life other than our own, and our sun has existed for only 4.5 billion years, the principle of mediocrity obliges us to say that every newborn star in the galaxy in our sun's category (i.e. G-class stars) will develop life after 1 billion years, that life will continue once it has started in spite of extinction events, and that there will be at least five major extinction events during the 3.5 billion years of life.
What can we say about the species existing over that period? Not much, because we don't have the data, but we can say that we have about 10 million species right now, and that approximately 1 in 100,000 species can be classified as "living fossils", in other words species that have survived a significant fraction of the time animal life has existed. Some of these living fossils have been around for 450 million years, so it's arguable that they have been around since the beginning of vertebrate life. Thus we can say that some species have existed essentially unchanged through the entire period that animal life has been around, passing through 5 major extinction events. That said, since we have no examples of living fossils older than animal life itself (about 500 million years), we cannot argue that any species lasts longer than 500 million years. We have to take that as the top line.
If we accept the principle of mediocrity for species, we can therefore say that 0.001 percent of all species will be long-lived during the 4.5 billion years after a star forms. According to the principle of mediocrity we also have to say that 1 in 100,000 of all *intelligent* species will be long-lived. Intelligent species are us: only one species, 1 in 10 million. So roughly one in 100,000 × 10 million intelligent species should last 500 million years, and the rest will last at most the normal lifespan of a species, which is about 2 million years.
Now, 100,000 × 10 million makes 1,000,000,000,000: odds of one in a trillion per star. With only around 7 billion G-class stars per galaxy, that gives us the answer that there are *no* long-lived intelligent species in our galaxy, and in fact only about one long-lived species per hundred-odd galaxies. But let's ignore that inconvenient fact for now.
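Here's a minimal sketch of that arithmetic, using the post's own assumed odds and reading "1 in 10 million" as the per-star chance of an intelligent species, as the argument above does; none of these numbers are data:

```python
# The post's assumptions, not measurements.
P_LONG_LIVED_GIVEN_INTELLIGENT = 1 / 100_000  # 1 in 100,000 intelligent species
P_INTELLIGENT_PER_STAR = 1 / 10_000_000       # 1 in 10 million, read per G star
G_STARS_PER_GALAXY = 7e9                      # ~1 in 13 of ~100 billion stars

p_per_star = P_LONG_LIVED_GIVEN_INTELLIGENT * P_INTELLIGENT_PER_STAR
expected_per_galaxy = p_per_star * G_STARS_PER_GALAXY
print(f"odds per G-class star: 1 in {1 / p_per_star:,.0f}")  # one in a trillion
print(f"expected long-lived species per galaxy: {expected_per_galaxy:.4f}")
print(f"i.e. roughly one per {1 / expected_per_galaxy:.0f} galaxies")
```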
Since we have no evidence of any intelligent species lasting 500 million years, unless it is us (and we're not there yet), we have to leave the putative long-lived species out of the picture. Even though, right there, we have resolved the Fermi paradox.
Now the interesting thing is this: intelligence clearly isn't normal, because it's only 1 in 10 million species. So can we apply the principle of mediocrity to it? Hard to say, but let's say that we can, at least with regard to *other* potential intelligent, signal-transmitting species.
We need big brains, hands, and communication in order to generate human-like intelligence, i.e. a species that can send signals. (We have to exclude the possibility that stellar travel is possible, because we haven't done it; but since we have sent signals, by the principle of mediocrity so can other intelligent species like ourselves.) How common are the conditions that lead to the development of intelligence? Big brains, obviously, but that's not all. Whales, elephants and dolphins all have big brains, and though it can be argued that they communicate, that's not enough. We need coordinated communication, big brains and tool making together. Tool making requires hands. There are several species with hands, but only us with big brains.
Coordinated communication exists in wolf packs and other pack animals. It also exists in herds of prey animals. So we can argue that the three things that together can lead to an intelligent, tool-making species are fairly common. We can likewise argue that the thing required to kickstart civilization, in addition to tool making, coordinated communication and big brains, is agriculture. Several species practice it, ants among others, and if you broaden the definition to symbiosis there are many, many species. So the conditions for the development of agriculture by tool-making, big-brained, coordinated-communicating intelligent creatures are common. It just takes a long time: not until the last 10,000 years in our case. We can also say that it's only during the last 100 years of a 4.5-billion-year period since a new star formed that a species can send a signal.

We can also say that it can't possibly have happened in the *first* 4.5-billion-year period of the universe, since the stars of that era were metal-poor, with no heavy elements around them, and we have to wait for metal-rich stars like the sun to form. That cuts out the possibility of a long-lived species appearing in the first part of the history of the universe. Now, the first such metal-rich stars formed around 9 billion years ago, so that means that in our galactic neighborhood we ought to have had at least two intelligent species that lasted 500 million years within the sphere of the nearest 200 galaxies. But we're talking GALAXIES!!! It's one thing to meet the ridiculous challenge of interstellar colonization and quite another to meet the challenge of intergalactic colonization. Never mind that we have to assume that the two putative civilizations *stayed civilized* during the entire period *and* coincided with us. Likely? I think not.
But let's get back to signals from our putative signalling civilization. What we can't say is whether such a signal will be received, or indeed whether it can even be understood over stellar distances, because although we have been sending signals for some 100 years via radio waves we haven't in fact received anything; it may simply be impossible to receive signals over stellar distances at our level of technology. But we can't say anything about that because we have no evidence. So instead, looking at the evidence (we can SEND signals; we know nothing about whether they can be received), let's ask the question: how many species can send signals?
About 1 in 13 stars in the Galaxy are G Class stars which are the same type of star as the sun.
Since the principle of mediocrity demands that we aren't special we have to say therefore that all G type stars in the galaxy have life and the capacity to generate intelligent species.
That is to say, there are about 100 billion stars in the galaxy, approximately 7 billion of which are G-class. If G-class stars are evenly distributed, then since the average distance between stars is about 5 light years, the average distance between civilizations like ours ought to be at most 5 × 13 = 65 light years. (Strictly, spacing in three dimensions scales with the cube root of rarity, which would give nearer 12 light years, so 65 is a conservative upper bound.)
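A quick sketch of that spacing estimate, showing both the linear figure used above and the cube-root version; the 5-light-year spacing and the 1-in-13 fraction are the figures quoted above:

```python
# Spacing between G-class stars if 1 in 13 stars qualifies.
AVG_STAR_SPACING_LY = 5.0
RARITY = 13  # 1 in 13 stars is G-class

linear = AVG_STAR_SPACING_LY * RARITY                 # the post's upper bound
volumetric = AVG_STAR_SPACING_LY * RARITY ** (1 / 3)  # 3D cube-root scaling
print(f"linear estimate:     {linear:.0f} light years")
print(f"volumetric estimate: {volumetric:.1f} light years")
```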
In our sphere-shaped region of space we will have thousands of stars capable of signalling us at the current time, if the principle of mediocrity is true. So either we're not listening, or we can't hear, or there are no signals.
But... the ingredients for life seem to be pretty common all throughout the universe so that's
probably not it. And if life gets started we have to assume that it will eventually lead
to something like us, so something else must be happening...
If we argue that over the last million years there has been at least one other contender intelligent species, but only we discovered agriculture, then that pushes the light cone out to 130 light years. There we have one possible explanation for why we haven't received any signals yet, if they are understandable or receivable over stellar distances at all: the signal simply hasn't reached us yet.
So that's fairly straightforward. We're going to receive signals sometime in the next 50 years if we're not unique and we are capable of receiving or understanding the signal at our current level of technology.
So now let's look and see if we can determine how long we're going to last.
We can conjecture that we have about a 1 in 100,000 chance of surviving 500 million years and a 99,999 in 100,000 chance of surviving less than that. Australopithecines lasted about 2 million years, so we can argue that we have an evens chance of lasting 2 million years. Homo sapiens has been around for about 200,000 years, and Neanderthals also lasted about 200,000 years, so we can say we have close to an evens chance of lasting another 200,000. So our time could be up about now, or else we have about a 49% chance of lasting another 1.8 million years and about a 1 in 1,000,000,000,000 chance of surviving 500 million years.
So most species around us last somewhere from 200,000 to 2 million years, and hardly any last 500 million years. We have to rule out the possibility that any species lasts 4.5 billion years, which is our time chunk for the formation of intelligent species from single-celled life, beginning to end. So which is it? How long do *we* last?
Unfortunately we can't know without receiving some signals. If we don't receive *any* signals within the next 65 years, that would indicate that civilizations capable of signalling other civilizations are rarer than can be predicted just from observing us, which in turn says that we are missing something in our predictions from the principle of mediocrity. On the other hand, if we receive signals from *all* of the predicted civilizations, then we can fairly confidently say that in a sphere of space containing 1,000,000,000,000 civilizations, one of them will be composed of a species that lasts for 500 million years. It doesn't, however, say that during that 500 million years it will maintain the ability to signal for the entire length of time, just that there should be one long-lived species.
It's also interesting to note that dinosaur-killer events take place approximately every 700 million years, so it's quite possible that a long-lived species would be born, live its life and die out in between those large extinction events; large extinction events might not be what puts an end to them.
In the case of the short-lived species (200,000 to 2 million years), it's even less likely that a dinosaur-killer event would put an end to them, because such events simply don't happen often enough; if there are lots of such species in a volume of space, the chance of an extinction event taking place during the 2 million years for all of them is vanishingly small.
What *can* we say?
Well, during the last 200,000 years there were at least two, and potentially up to five, competing intelligent species in the same spot, and only one of us survived. So we can say that over a 200,000-year span an intelligent species has about a 50% to 80% chance of being outcompeted by another intelligent species, and about a 50% to 20% chance of surviving the competition.
Since we can rule out extinction events such as dinosaur killers doing us in, and we can rule out visitation by star-faring aliens, we have to assume that the threat is competition from another intelligent species right here on Earth. We do in fact have a candidate: sufficiently intelligent machines. So let's look at that.
If we then take the roughly 49.999999% chance of being wiped out right now multiplied by 20-50%
we have thus about 10-25% chance of surving competition by another species plus 49.999999% of surviving at least 1.8 million years and some small fraction of surving 500 million years then it's at worst about a 40 per cent chance of extinction with a 60% survival likelihood and at best 75% of surving *if* we are the better fitted to the conditions.
Now that's where it gets interesting.
Since the only example we have is of a fitter species outcompeting less fit species, what's so
special about us? Are we more violent, more cooperative, simply better at acquiring resources or what?
In fact in the case of the neanderthals and the denisovans they have left 6% of their DNA in us and somewhat around 10% in total including
other species. So statistically if it's us that get outcompeted it looks like we don't get wiped out, we get absorbed.
Since what we're most likely to be facing in the near future in terms of competition is that from our machines and
more specifically competition from intelligent machines we can argue that if we are to go extinct we probably will hold out for quite
some time and eventually get absorbed by our machines which will be somewhat like us and have some
of us in them and thus we can probably safely rule out an extinction event predicated by an unfriendly AI
in a hard takeoff scenario, though we cannot rule out partial extinction by machines.
So somewhere between 40% likely we will be absorbed by machines and 60% likely we will still be
recognizably human in 1.8 million years with a vanishingly small chance that we will be recognizably human in 500 million years.
Interestingly, we're going to have to become significantly more intelligent to defeat machine intelligences or else our machine intelligences will be incapable of becoming much more intelligent than us and we outcompete them in some other way but in either case it's likely that our ability to process data will increase significantly. What's interesting about that is it could answer the question of whether we are capable of understanding or receiving messages transmitted over stellar distances but we just don't have the technologies.
In any case, as far as I'm concerned the fermi paradox is resolved. Putative signalling civilizations are too far apart, there's no proof we are even capable of hearing their signal, they won't coincide with us in time *and* there might be a "great filter" which wipes them out.
Or not.
Life has existed for approximately 3.5 billion years and the sun has existed for approximately 4.5 billion years.
Since we have no evidence of life other than our own, and our sun has existed for only 4.5 billion years, the principle of mediocrity obliges us to say that every newborn star in the galaxy that fits the category of our sun (i.e. G class stars) will develop life after about 1 billion years, that life will continue once it has gotten started in spite of extinction events, and that there will be at least five major extinction events during the 3.5 billion years of life.
What can we say about the species existing? Not much over the entire period, because we don't have the data, but we can say that there are about 10 million species alive right now, and that approximately 1 in 100,000 of them can be classified as "living fossils", in other words species that have survived a significant percentage of the time that animal life has existed. Some of these living fossils have been around for 450 million years, so it's arguable that they have been around since the beginning of vertebrate life. Thus we can say that some species have existed essentially unchanged, surviving all five major extinction events, for the entire period that animal life has been around. That said, since we have no examples of living fossils older than animal life itself (roughly 500 million years), we cannot argue that any species lasts longer than 500 million years. We have to say that's the top line.
If we accept the principle of mediocrity for species, we can therefore say that 0.001 percent (1 in 100,000) of all species will be long lived during the 4.5 billion years after a star forms. By the same principle we also have to say that 1 in 100,000 of all *intelligent* species will be long lived. Intelligent species means us: one species out of about 10 million, i.e. 1 in 10 million. So one in 100,000 x 10 million species should be both intelligent and last 500 million years, and the rest will last at most the normal period of time, which is about 2 million years for a species.
Now 100,000 x 10 million makes 1,000,000,000,000, which is one trillion. That gives us the answer that there are *no* long-lived intelligent species in our galaxy; with roughly 7 billion G class stars per galaxy, it works out to about one long-lived intelligent species per hundred-odd galaxies. But let's ignore that inconvenient fact for now.
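To make that back-of-envelope arithmetic explicit, here's a minimal Python sketch. The species counts and star counts are just the round figures quoted in this post, and treating the one-in-a-trillion rarity as "per life-bearing star" is an interpretive assumption, not an established result:

# Back-of-envelope rarity of a long-lived intelligent species,
# using the round figures quoted in this post.
P_LONG_LIVED = 1 / 100_000        # fraction of species lasting ~500 million years
P_INTELLIGENT = 1 / 10_000_000    # fraction of species that are intelligent (us: 1 in ~10 million)

combined = P_LONG_LIVED * P_INTELLIGENT
print(f"one long-lived intelligent species per {1 / combined:,.0f}")  # per trillion

# Assumption: read that as "per G class star with life". With ~7 billion
# G class stars per galaxy, the expected number per galaxy is tiny.
G_STARS_PER_GALAXY = 7e9
per_galaxy = combined * G_STARS_PER_GALAXY
print(f"{per_galaxy:.3f} per galaxy, i.e. about one per {1 / per_galaxy:.0f} galaxies")

That prints about 0.007 per galaxy, or one long-lived intelligent species per 140-odd galaxies, which is the inconvenient fact being set aside.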
Since we have no evidence of any intelligent species lasting 500 million years, unless it is us (and we're not there yet), we have to leave the putative long-lived species out of the picture. Even though right there we have resolved the fermi paradox.
Now the interesting thing is this: intelligence clearly isn't normal, because it shows up in only 1 in 10 million species. So can we apply the principle of mediocrity to it? Hard to say, but let's say that we can, at least with regard to *other* potential intelligent signal-transmitting species.
We need big brains, hands, and communication in order to generate human-like intelligence, i.e. a species that can send signals. (We have to exclude the possibility of stellar travel because we haven't done it, but since we have sent signals, by the principle of mediocrity so can other intelligent species like ourselves.) How common are the conditions that lead to the development of intelligence? Big brains, obviously, but that's not all. Whales, elephants and dolphins all have big brains, and though it can be argued that they communicate, that's not enough. We need coordinated communication, big brains and tool making together. Tool making requires hands. There are several species with hands, but only we have hands and big brains together.
Coordinated communication exists in wolf packs and other pack animals, and also in herds of prey animals. So we can argue that the three things which together can lead to a tool-making intelligent species are fairly common. We can likewise argue that the thing required to kickstart civilization, in addition to tool making, coordinated communication and big brains, is agriculture. Several species practise it, ants among others, and if you broaden the definition to symbiosis there are many, many more. So the conditions for the development of agriculture by tool-making, big-brained, coordinated-communicating creatures are common. It just takes a long time: in our case, not until the last 10,000 years. We can also say that it's only during the last 100 years of the 4.5 billion year period since our star formed that a species has been able to send a signal. And it can't possibly have happened in the *first* 4.5 billion years of the universe's history, since all stars in that period were metal-poor population II stars (and before them population III), with no heavy elements around them; we have to wait for metal-rich population I stars like the sun to form. That cuts out the possibility of a long-lived species appearing in the first part of the history of the universe. The first metal-rich stars formed around 9 billion years ago, so at a rate of one long-lived species per hundred-odd galaxies we ought to have had at least two intelligent species that lasted 500 million years within the sphere of the nearest 200 galaxies. But we're talking GALAXIES!!! It's one thing to meet the ridiculous challenge of interstellar colonization and quite another to meet the challenge of intergalactic travel. Never mind that we'd have to assume the two putative civilizations *stayed civilized* during the entire period *and* coincided with us in time. Likely? I think not.
But let's get back to signals from our putative signalling civilizations.
What we can't say is whether such a signal will be received, or indeed whether it can even be understood over stellar distances, because although we have been sending signals via radio waves for some 100 years, we haven't in fact received anything; it may simply be impossible to receive signals over stellar distances with our level of technology.
But we can't say anything about that, because we have no evidence. So instead, looking at the evidence we do have (i.e. that we can SEND signals, with nothing either way about whether they can be received), let's ask the question: how many species can send signals?
About 1 in 13 stars in the galaxy are G class stars, the same type of star as the sun.
Since the principle of mediocrity demands that we aren't special, we have to say that all G class stars in the galaxy have life and the capacity to generate intelligent species.
That is to say, there are about 100 billion stars in the galaxy, approximately 7 billion of which are G class. If G class stars are evenly distributed, then, since the average distance between stars is about 5 light years, a simple linear scaling puts the average distance between civilizations like ours at 5 x 13 = 65 light years.
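Putting rough numbers on that spacing estimate (a sketch of the figures above; the 5 x 13 step is the deliberately simple linear scaling just described, and the cube-root line is an added caveat for stars spread through a volume):

# Spacing estimate between civilizations, using the round figures above.
STARS_IN_GALAXY = 100e9
G_CLASS_FRACTION = 1 / 13
MEAN_STAR_SEPARATION_LY = 5.0

print(f"G class stars: ~{STARS_IN_GALAXY * G_CLASS_FRACTION:.1e}")  # ~7.7e9

# Linear scaling as used above: one candidate star per 13 stars
# => 13 x 5 ly = 65 ly between neighbouring civilizations.
linear_sep = MEAN_STAR_SEPARATION_LY / G_CLASS_FRACTION
print(f"linear estimate: {linear_sep:.0f} ly")

# Caveat: in three dimensions, separation scales with the cube root of
# rarity, giving 5 * 13**(1/3) =~ 12 ly, so 65 ly is the pessimistic end.
volumetric_sep = MEAN_STAR_SEPARATION_LY * (1 / G_CLASS_FRACTION) ** (1 / 3)
print(f"volumetric estimate: {volumetric_sep:.0f} ly")

Either way, if the mediocrity assumptions hold, the nearest comparable civilization should be within a few tens of light years.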
In our sphere-shaped region of space we will therefore have thousands of stars hosting civilizations capable of signalling us at the current time, if the principle of mediocrity is true. So either we're not listening, or we can't hear, or there are no signals.
But... the ingredients for life seem to be pretty common all throughout the universe, so that's probably not it. And if life gets started we have to assume it will eventually lead to something like us. So something else must be happening...
If we argue that over the last million years there has been at least one other contender intelligent species, but only we discovered agriculture, then that pushes the light cone out to 130 light years: if the nearest candidate fell at the agriculture hurdle, the nearest signalling civilization would be the next one out, at roughly 2 x 65 = 130 light years. There we have one possible explanation for why we haven't received any signals, even assuming they are understandable and receivable over stellar distances: the signals haven't reached us yet.
So that's fairly straightforward: we should receive signals sometime in the next 50 years or so, provided we're not unique and we are capable of receiving and understanding the signal at our current level of technology.
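As a toy illustration of that light cone argument (with hypothetical numbers, since the post's offsets are rough): a signal from d light years away, transmitted for t years, has reached us only if t >= d.

# Toy light-cone check: has the first signal from a civilization
# d light years away, transmitting for t years, reached us yet?
def signal_arrived(distance_ly: float, transmitting_years: float) -> bool:
    return transmitting_years >= distance_ly

# Hypothetical neighbours at the 65 ly and 130 ly spacings above, which
# (like us, by mediocrity) have been transmitting for about 100 years.
for d in (65, 130):
    print(f"{d} ly: {signal_arrived(d, 100)}")
# -> 65 ly: True (should already be audible)
# -> 130 ly: False (about 30 more years to wait on these numbers)

On those toy numbers the 130 light year neighbour becomes audible within a few decades, which lines up with the "next 50 years" window above.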
So now let's look and see if we can determine how long we're going to last.
We can conjecture that we have about a 1 in 100,000 chance of surviving 500 million years, and a 99,999 in 100,000 chance of surviving less than that. Australopithecines lasted about 2 million years, so we can argue that we have an evens chance of lasting 2 million years. Homo sapiens has been around for about 200,000 years, and the neanderthals also lasted about 200,000 years, so we can say we have close to an evens chance of lasting only 200,000 years.
So our time could be up about now (roughly a 50% chance), or else we have about a 49% chance of lasting another 1.8 million years, and about a 1 in 100,000 chance of surviving 500 million years.
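If you read those three figures as a rough probability partition over lifetimes (my interpretation of the numbers above, not something established), you can even put an expected lifetime on us:

# Treat the figures above as a rough partition over our total lifespan:
# ~50% we end around 200,000 years, ~49.999% around 2 million years,
# and 1 in 100,000 that we go the full 500 million years.
outcomes = {
    200_000: 0.5,
    2_000_000: 0.5 - 1 / 100_000,
    500_000_000: 1 / 100_000,
}
assert abs(sum(outcomes.values()) - 1.0) < 1e-12  # probabilities sum to 1

expected_years = sum(years * p for years, p in outcomes.items())
print(f"expected total lifespan: ~{expected_years / 1e6:.2f} million years")  # ~1.10

So on this reading the expected value lands at around 1.1 million years, comfortably inside the 200,000 to 2 million year band.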
So most species around us last somewhere from 200,000 to 2 million years, and hardly any last 500 million years. We also have to rule out the possibility of any species lasting 4.5 billion years, since that is the whole time chunk for getting from a newly formed star to intelligent species via single-celled life, beginning to end.
So which is it? How long do *we* last?
Unfortunately we can't know without receiving more signals. If we don't receive *any* signals within the next 65 years, that would indicate that civilizations capable of signalling other civilizations are rarer than can be predicted just from observing ourselves, which in turn says we are missing something in our predictions from the principle of mediocrity. On the other hand, if we receive signals from *all* of the predicted civilizations, then we can fairly confidently say that in a sphere of space containing 100,000,000,000 civilizations, one of them will be composed of a species that lasts 500 million years. That doesn't, however, say it will maintain the ability to signal for that entire length of time, just that there should be one long-lived species.
It's also interesting to note that dinosaur-killer events take place approximately every 700 million years, so it's quite possible that a long-lived species will be born, live its life and die out between two of those large extinction events; large extinction events might not be what puts an end to it.
In the case of the short-lived species (200,000 to 2 million years) it's even less likely that a dinosaur-killer event would put an end to them, because such events simply don't happen often enough; and if there are lots of such species in a volume of space, the chance of an extinction event landing within the 2 million year window for all of them is vanishingly small.
What *can* we say?
Well, during the last 200,000 years there were at least two and potentially up to five competing intelligent species in the same spot, and only one of us survived. So we can say that over a 200,000 year span an intelligent species has about a 50% to 80% chance of being outcompeted by another intelligent species, and correspondingly about a 50% to 20% chance of surviving the competition.
Since we can rule out extinction events such as dinosaur killers doing us in, and we can rule out visitation by star-faring aliens, we have to assume the threat is competition from another intelligent species right here on Earth. We do in fact have a candidate: sufficiently intelligent machines. So let's look at that.
If we take the roughly 50% chance that our time is up about now and multiply it by the 20-50% odds of surviving the competition, we get about a 10-25% chance of making it through the competition phase. Add the roughly 50% chance that our time simply isn't up yet (lasting at least another 1.8 million years, with some small fraction lasting 500 million), and it comes out, at worst, to about a 40 per cent chance of extinction with a 60% survival likelihood, and at best about 75% survival, *if* we are the better fitted to the conditions.
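Spelling out that combination (a minimal sketch of the arithmetic just described):

# Survival odds: ~50% chance our time is up now, in which case we survive
# with the 20-50% competition odds; otherwise we carry on regardless.
P_TIME_UP = 0.5

for p_win in (0.2, 0.5):  # worst and best odds of surviving the competition
    p_survive = (1 - P_TIME_UP) + P_TIME_UP * p_win
    print(f"competition odds {p_win:.0%}: survive {p_survive:.0%}, extinct {1 - p_survive:.0%}")
# -> 20%: survive 60%, extinct 40% (the worst case above)
# -> 50%: survive 75%, extinct 25% (the best case)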
Now that's where it gets interesting.
Since the only example we have is of a fitter species outcompeting less fit ones, what's so special about us? Are we more violent, more cooperative, simply better at acquiring resources, or what?
In fact, in the case of the neanderthals and the denisovans, they have left around 6% of their DNA in us, and somewhere around 10% in total including other archaic species. So statistically, if it's us that get outcompeted, it looks like we don't get wiped out: we get absorbed.
Since what we're most likely to face in the near future is competition from our machines, and more specifically from intelligent machines, we can argue that if we are to go extinct we will probably hold out for quite some time and eventually be absorbed by our machines, which will be somewhat like us and have some of us in them. Thus we can probably rule out an extinction event predicated on an unfriendly AI in a hard takeoff scenario, though we cannot rule out partial extinction by machines.
So it's somewhere around 40% likely that we will be absorbed by our machines, and around 60% likely that we will still be recognizably human in 1.8 million years, with a vanishingly small chance that we will be recognizably human in 500 million years.
Interestingly, either we're going to have to become significantly more intelligent to hold our own against machine intelligences, or our machine intelligences will prove incapable of becoming much more intelligent than us and we will outcompete them some other way; in either case it's likely that our ability to process data will increase significantly. What's interesting about that is it could answer the question of whether we are inherently capable of understanding or receiving messages transmitted over stellar distances, or whether we simply don't have the technology yet.
In any case, as far as I'm concerned the fermi paradox is resolved: putative signalling civilizations are too far apart, there's no proof we are even capable of hearing their signals, they won't coincide with us in time, *and* there might be a "great filter" which wipes them out.
Or not.
Saturday, 29 October 2011
Abiotic oil not false after all
So although this is unusable as a resource, it's interesting to note that not only is there abiotic methane (i.e. natural gas) in gigantic deposits on Saturn's moons, but it turns out there are also abiotic oil-like and coal-like compounds in space. Perhaps some of our own oil and coal was abiotically formed?
This is only interesting from a scientific perspective, however, because even if large deposits of abiotic oil or coal were formed, we've still found most of what there is to find and are still unable to keep increasing production endlessly. Peak oil is still going to happen, because it's effectively a limit on how fast you can pull the stuff out. The real argument is over decline rates, but regardless, this post is about abiotic oil.
Turns out that "Prof. Sun Kwok and Dr. Yong Zhang of The University of Hong Kong show that an organic substance commonly found throughout the Universe contains a mixture of aromatic (ring-like) and aliphatic (chain-like) components. The compounds are so complex that their chemical structures resemble those of coal and petroleum. Since coal and oil are remnants of ancient life, this type of organic matter was thought to arise only from living organisms. The team's discovery suggests that complex organic compounds can be synthesized in space even when no life forms are present."
Interesting. We apparently haven't learned all there is to know about petroleum just yet.