Thursday, June 26, 2008

Catching up with LoneRubberDragon:

The full response is here. The primary discussion centers on this:

I defined engineering as science + intelligent design. LRD suggested science+ID+goal, which I won't quibble with except to say that ID implies a goal from my viewpoint. LRD calls this engineering1.

LRD then defines engineering2:

"Engineering2 = (1) finite-applied-modality modules + (2) combinatorial-heirarchical-exploration of modules + (3) utility-function-biasing goals for judging module-fitness + (4) a medium to run the evolution on.

To exemplify engineering2 in industry; in efficient integrated circuit layout, engineering2 now does what was once done by humans by hand and engineering1. Today with an engineering2 method, one defines: ..."

Basically, this is the observation that evolutionary algorithms have done some amazing things solving certain problems, hence the notion that evolution provides us with an independent source of designs which don't require intelligence. Also note that LRD correctly asserts that evolutionary methods have replaced some rather tedious jobs that engineers previously did regarding circuit layout. No problem with this. Before responding to this, I should at least be happy that LRD hasn't taken the current scientifically and constitutionally valid position that ID doesn't exist. This is certainly a breath of fresh air.
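For concreteness, LRD's four components map onto a toy evolutionary algorithm something like this. This is a minimal sketch in Python, not anyone's production code; the "OneMax" objective (count the 1 bits) is my own illustrative stand-in for a real design problem:

```python
import random

random.seed(0)

# (1) finite-applied-modality modules: a tiny alphabet of building blocks
MODULES = [0, 1]

# (3) utility-function-biasing goal: count the 1s (the OneMax toy problem)
def fitness(genome):
    return sum(genome)

# (2) combinatorial exploration: recombine and randomly vary modules
def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    return [random.choice(MODULES) if random.random() < rate else g
            for g in genome]

# (4) a medium to run the evolution on: a population looped over generations
def evolve(length=32, pop_size=40, generations=60):
    pop = [[random.choice(MODULES) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # keep the fitter half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Note that the module alphabet, the variation operators, the fitness function, and the population loop are all supplied up front by the engineer.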

The problem I have with LRD's engineering2 definition is this: Where did the '(1) finite-applied-modality modules + (2) combinatorial-hierarchical-exploration of modules + (3) utility-function-biasing goals for judging module-fitness + (4) a medium to run the evolution on' come from? They are always the result of engineering1: Intelligent Design. What we are talking about is a very limited solution to a limited problem. For the most part, usage of evolutionary design algorithms is never accompanied by a decrease in the employment of engineers. There are simply too many problems out there that aren't amenable to evolutionary methods.

Now someone will protest that evolutionary algorithms can be designed to solve any problem that engineering1 can solve. Yes, this is true. On the other hand, it is impossible to do this without an engineering1 infrastructure based on all of our current experience. In nature, we must also consider that there is only one basic evolutionary method, using DNA. In engineering, we are free to concoct all kinds of subtle, problem-specific variations on evolutionary methods that will only solve one kind of problem. The fact is that evolution isn't a general paradigm that replaces engineering1. It merely solves some very narrow classes of problems after 99% of the ID work is already completed via engineering1. Hence, the correct way to look at the impact of evolutionary methods on circuit layout is this: Before: 100% engineering1. After: 99% engineering1 + 1% engineering2. Not many engineers will lose their jobs over this.

Taking these observations and applying them to life, I believe that God did design creatures to undergo limited biological change to fill niches and fight diseases. From an optimization theory viewpoint, evolutionary methods are good for keeping a system stable that is already near a local, but dynamic optimum. Nothing magical. No supernatural powers. Just another mundane tool for the engineer's toolkit.

57 comments:

LoneRubberDragon said...

Great arguments in your rebuttal.

Now, you have asked where I think the following things come from: [(1) finite-applied-modality modules + (2) combinatorial-hierarchical-exploration of modules + (3) utility-function-biasing goals for judging module-fitness + (4) a medium to run the evolution on]. To give some credence to a natural theory, without denying God, but to illustrate the potential of the methods, I believe that the methods, applied to the natural origin of life in the context of natural physics, correspond as follows:

[(1) finite-applied-modality modules] corresponds to the natural atoms and molecules that could be found on the early earth. The elements themselves came from stellar fusion, combining simplicity into complexity, taking helium and hydrogen and gravity, through stellar fusion and supernovas, to create all of the natural elements, like pocket watches assembled without a maker. Uranium-235, with 92 protons, 143 neutrons, and 92 electrons in 18 quantum electron shells, has more gears than a pocket watch, more cams than an engine, and more structured bond capability than two mere human-designed jet engine parts in a tornado. Carbon is especially well structured to form carbon-carbon double bonds, carbon-hydrogen multi-bonds, hybrid electron cloud bonds, and bonds with oxygen, nitrogen, and numerous other elements. It is the best natural LEGO piece of nature, and occurs quite concentrated in numerous supernova remains, as some older-generation stars fuse masses of carbon. Earth, by natural gravitational methods, coalesced relatively large amounts of carbon, calcium, silicon, oxygen, and other materials useful for complex molecules to occur, but not yet. The molecules came from natural chemical reactions in the newly created complexity of elements derived from the early universe's simplicity of hydrogen and helium.

[(2) combinatorial-hierarchical-exploration of modules] corresponds to the combined exploration of all of the natural compounds present in the oceans of the early abiotic earth chemistry. In the oceans were water, carbon residues, metal ions, salts, minerals, molecules like methane, oxygen, and sulphur compounds, hydrogen, hydrothermal vents pouring out numerous chemicals from heated reactions, impacting asteroids bearing carbonaceous materials, sunlight, lightning, lava, etc. There can be 1000 stable molecules at significant reaction concentrations mixing in the early ocean. Now 1000 chemicals in a 2-molecule reaction matrix have 1000*1000*A possible reactions, or A*1,000,000 possible 2-molecule reactions. The A represents the average number of reactions possible in any one matrix bin. For example, one molecule may have a high A(n), with 10 states of vibrational, chiral-handedness, electronic energy configuration, polarity, and steric (stereoscopic) catalytic or bond reaction sites for three-dimensional molecules. Some molecules may have only A(n)=1 stable mode of reaction, like a single atom of free hydrogen in a high-pH thermal vent. Some molecules may have no reactivity, like a neon or xenon atom. So let's say A(all)=0.001 average forward reactions, given the initial mix of 1000 naturally present molecules. That means that 1,000,000*0.001 molecules will be formed in significant reaction amounts, or 1,000 new chemicals. So after a period of time the ocean naturally has more complexity, with 2,000 molecules in enduring reactive amounts. But that only counts two-molecule reactions. There are three-molecule combinations that must be checked among the 1,000 original molecules, with 1,000*1,000*1,000*B, or B*1,000,000,000 possible reactions. For example, Hx+Sy+Oz --> H2SO4 is a three-molecule forward reaction. So say B is 0.000001; that also yields 1,000 new sustained chemical species. And so on combinatorially for C=1 to 1000.
So let's say the sum of all combinatorial chemical analysis yields 10,000 molecules in the ocean. Feedback processes now yield a 10,000 by 10,000 matrix of 2-molecule "specie" reactions with the new natural complexity. A similar A of 0.001 may be generated, yielding A*100,000,000 matrix nodes, or 100,000 new sustained forward-reaction chemical species. And so forth: the combinations of 3, 4, to N chemicals in N-dimensional matrices are analyzed naturally in the ocean. This growing matrix base of chemicals creates a natural feedback loop, where the initial 1,000 natural molecules proliferate into millions of sustained chemicals, including primitive lipids, amino acids, and the first building blocks of RNA and DNA bases. All of these may polymerize under catalytic reactions in this feedback combinatorial chemistry of natural complexity from simplicity. Polymerized amino acids form proteins, which allow new species of catalytic reactions with their polar, hydrophobic, hydrophilic, and catalytic properties. Polymerized RNA allows its own catalytic sets of reactions, as do polymerized DNA bases. These organics form the base of a natural digital chemistry where species that catalyze each other increase their mutual numbers over reactions that don't. Complexity increases ever more, through natural means, from a universe that started with only hydrogen and helium.
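The counting argument above can be written down as a small calculation. This is a sketch only; the reaction fractions A and B are the comment's illustrative guesses (using A=0.001, so that 1,000,000*A comes out to the 1,000 new chemicals the text quotes), not measured chemistry:

```python
def new_species(n_species, order, reaction_fraction):
    """Expected number of new sustained species from combinations of
    `order` molecules drawn from a pool of `n_species`, where
    `reaction_fraction` is the average forward-reaction yield per
    cell of the combination matrix."""
    return int(n_species ** order * reaction_fraction)

# The worked example: 1,000 starting molecules in the early ocean.
pairwise = new_species(1000, 2, 0.001)     # 1,000 * 1,000 * A
triples  = new_species(1000, 3, 0.000001)  # 1,000^3 * B

# Feedback round: the enlarged pool of 10,000 molecules reacts again.
feedback = new_species(10_000, 2, 0.001)

print(pairwise, triples, feedback)  # 1000 1000 100000
```

The point of the feedback term is that each round's products enlarge the pool the next round combines over, so the species count compounds rather than merely adding.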

[(3) utility-function-biasing goals for judging module-fitness] corresponds to the feedback cycles detailed somewhat in [(2)]. Species of chemicals that cooperate in catalytic cycles, "hypercycles", increase their numbers, which is their fitness. It is not a force knowing what to do; it is an affinity of catalytic and reaction ease, in an increasingly complex soup of compounds. Chemical reproduction (aka forward-reaction hypercycles) and chemical durability (aka forward-reaction product stability in time, compared to shorter-lived chemical species that would otherwise break down) are the utility functions of success, just like macroscopic evolutionary forces, but here on the molecular scale of chemical evolution. Memory is also recorded, and mutation is possible, in the factors of "code" micro-units in a polymerization (digital chemistry, e.g. protein from amino acids, RNA strandlet from RNA bases). As [(2)] continues on the digital chemical level of proteins and RNA, one sees codes that proliferate, and others that do not react well, so why would chemistry, or biology in the future, "want" to use them? But "want" is metaphorical here for chemical reaction rate (reproduction numbers) and durability fitness to continue reactions over time. Judgement of chemical species is a direct function of reactions in time, versus poverty and short-lived species. The grasshopper chemical versus the ant chemical. "The goals" are "the reactions". "The utility" is "the reactions". You don't see xenon tetrafluoride in biology, because it is too hard to react. But glutamine (a protein's amino acid) is seen in biology, because it is easy to produce. Xenon tetrafluoride is unfit, like ordinary jet parts in a tornado, and the amino acid glutamine is like jet parts with magnetic-coded LEGO connectors ready to bond to their favorite part. Utility, goals, and fitness are natural products of nature.
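The "fitness is the reaction rate" idea can be illustrated with a toy mutual-catalysis model. This is my own minimal sketch, with arbitrary rate constants: two species whose production rates depend on each other compete, under a fixed total concentration, against an uncatalyzed replicator:

```python
# Toy hypercycle: x and y catalyze each other's production; z replicates
# alone. A constant-total "flow reactor" normalization makes them compete.
def normalize(pops, total=3.0):
    s = sum(pops)
    return [p * total / s for p in pops]

x, y, z = 1.0, 1.0, 1.0
dt = 0.1
for _ in range(1000):
    dx = 0.05 * x * y          # x's production is catalyzed by y
    dy = 0.05 * x * y          # and y's by x (a 2-member hypercycle)
    dz = 0.03 * z              # z replicates without any partner
    x, y, z = normalize([x + dt * dx, y + dt * dy, z + dt * dz])

# The cooperating pair crowds out the loner.
print(round(x, 2), round(y, 2), round(z, 4))
```

No species "wants" anything here; the pair wins simply because its reactions run faster once both partners are present, which is the metaphor the comment is reaching for.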

[(4) a medium to run the evolution on] corresponds to nature itself, starting with hydrogen, helium, and gravity, which formed the stars, which formed the complexity of atomic "watches" for all the elements, which formed the planets, which formed the ocean, which formed 1000 chemicals in the ocean, which formed 1,000,000,000 chemicals in the ocean, which formed digital chemistry reproduction, which formed the first life, when digital chemistry and liposome lipid bubbles hit on the first few thousand sets of chemical reactions that were self-sustaining and self-reproducing. Complexity, ever increasing, from natural means, from utter simplicity at the beginning of time, with nothing but the laws of physics.

Now this doesn't deny that God could have made the universe at the beginning, or that perhaps String Theory could explain the nature of the universe's origin, and God provided the matter and energy to these laws to give us space and our life 12 billion years later; but along the way, after the first second of the universe, life could arise through natural complexity arising from natural simplicity by [(1)], [(2)], [(3)], and [(4)] in engineering2 (natural means).

With humans, we have a lot of learning to do about engineering2, so that, say, we can put [(1)], [(2)], [(3)], and [(4)] into a computer properly to simulate the layout of a circuit, the derivation of new mathematics, the recognition of images, the learning of a language, the simulation of combinatorial chemistry, the generation of consciousness on a computer, et cetera. But when it is done "right" to "nature", a basic [(1)], [(2)], [(3)], and [(4)] captures the essence of a living thing with minimal instruction and generates self-sustained growth, like a zygote in a mother's womb that turns into a baby, turns into a child, and becomes an adult with all of the functionality we have: with supervision of nature at the beginning, genes during growth, supervision as a child, to skilled absorption of knowledge and wisdom as an adult. And no one has ever said that intelligent humans need no supervision, are not susceptible to the laws of nature, or that children cause a reduction in the employment of humans in general. Babies start out requiring a lot of supervision and are very limited in application, like so-called "modern" engineering2, but look at what children become over time! Yes, there are many problems a child can't solve that an adult can solve, so do we throw out children, like engineering2, because adults will always be superior to their children? Nature had 3.5 billion years to work out subtle methods, so for 35 years of advanced GA and other evolutionary algorithms to get to insect-level intelligence is impressive, in man's hands, like evolution 1,000,000,000 times faster than nature. I agree, the future cannot come soon enough, but we need to consider the depths of evolution hand in hand with intelligence, both. Not many engineers will lose jobs to these things for a while, but Garry Kasparov is now the former world champion of chess to a computer; that "unsettled" him.

Many of these cybernetic prosperity-and-destruction engineering2 "rules" outlined in the combinatorial chemistry discussion (keeping good words, dispersing bad words, cooperative wisdom, following right paths, wrong paths leading to destruction, and so forth) can all be found throughout the Old Testament book of Proverbs, when rightly dividing the word.

And I hope there is a God to save the souls who have passed on, for too many scientists want to deny God and soul, which I do not support. And a science without a soul itself cannot save us, by its very mindset.

How does this sound? It is not me who speaks, but I believe these are true words I hear and write.

LoneRubberDragon said...

errata: "Nature had 3.5 billion years to work out subtle methods, for 35 years of advanced GA, and other evolutionary algorithms to get to insect level intelligence, is impressive, in man's hands, like evolution 1,000,000,000 times faster than nature."

should read: "... [100,000,000] times faster than nature."

LoneRubberDragon said...

One more modification, " ...[upwards of 100,000,000]... "

Looney said...

LRD, I will point out another post on this topic which I had previously made here.

Much of the rhetoric claiming that evolutionary methods are valuable centers on layout problems like the traveling salesman problem (TSP). In life, however, the vast majority of the problems are sizing problems. The simple test case in the link shows where evolutionary methods break down.

On the points:

[1] Actually, amino acids per DNA. The circuit layout problem and the TSP work with abstract blocks that are constrained. For example, TSP has n cities, where each city occurs exactly once in the genome, which isn't very natural. Not sure how many atoms are represented by the abstract block of the circuit layout, but it is probably a huge number. This is a monstrous gap between engineered evolution algorithms and natural evolution.

[2] Naturally occurring amino acids don't combine with one another. They need a biological machine to do it. Not much recombination will go on here. Regardless, we are now trying to make a biological machine based on pure trial and error. Isn't that like putting silicon dust into the dryer and hoping for a computer to pop out?

[3] Fitness functions. Yes, the engineer can speed up convergence dramatically if he knows how to play with these. Does life have an engineer?

More importantly, ecologists claim that the fitness function is "ecological balance", whatever that means. From an optimization viewpoint, it means we have a time scale of decades for a single convergence iteration and the number of degrees-of-freedom is the sum of all the genomes in the system.

[4] The medium is nature, but with a biological machine that must exist first. The computing machine must also exist in its entirety before a circuit layout operation can be done.

In the end, no progress has yet been made to demonstrate engineering2 without it being wholly dependent on engineering1. Where engineering2 is successful, it is really just a niche application of an algorithm that only resembles life. This simplification cuts design space sizes by astronomical factors and is a key part of their success.

My bigger concern is that I have personally seen projects derailed when someone demanded usage of evolutionary methods where classical optimization was already working. As the link indicates, sometimes evolutionary methods are spectacularly inferior.

Looney said...

LRD, I will give my little summary to this problem:

When the theologians (yes, Darwin was only trained as a theologian - as were most of his initial supporters) invented the meta-narrative of evolution, they denied the supernatural creative powers of God. Now in the 21st century, we are starting to understand a bit of the magnificent machinery of life which Darwin never understood.

Now, depending on whether we are a creationist or an evolutionist, our interpretation of evolutionary methods will go one way or another. For the creationist, genetics solves some problems, as God designed it to do. For the evolutionist, there is the wild-eyed belief that all of those supernatural creative powers of God can be harnessed if only we can find the right evolutionary method. It ain't going to happen.

Delirious said...

I must admit that both of you have talked over my head here lol. I'm also a little ADD, so had to skim some parts. ;) But I did have one thought about all of this. You both spoke about the whole "tornado" theory, and Lonerubberdragon (I really would like to know where that name came from lol) said that he believes in Evolution and Intelligent Design. I think the difference between him and me is that the intelligence I'm talking about is more intelligent than the intelligence he is talking about. In my "intelligent design" world, the designer doesn't have to rely upon random events, or chance "gluing together" of organisms. In my definition of intelligent design, the designer is more than intelligent: He is all powerful, and can command the organisms to come together, and doesn't have to wait billions of years for them to happen upon each other by chance.

Looney said...

Delirious, don't feel bad. Evolution is the only meta-narrative (err, theory) that is always defended based on the most recent ideas and complex explanations. Unless you have some idea of the most recent goings on in technology and science, it is very hard to follow - even for a technologist.

In the end, the 'theory of evolution', which has been treated as scientific fact since the 1870's, is always proven based on things that are less than 10 years old! LRD is using the latest forms that are popular. Another 10 years later, it will be something else and the current arguments will be forgotten. The problem with putting an end to the theory of evolution is that it always evolves into something a little different and we have to fight it all over again.

LoneRubberDragon said...

In my definitions, engineering1 (that Looney and I share equally) is most definitely the fast and directed engineering of analytical and creative methods. That definition I never denied or downplayed. The engineering2 (that only I will share) is definitely the slow and brute force method of base rules being combined into higher rules through natural prosperous methods (Proverbs). Both have a place in engineering.

For problems like language understanding and character/image recognition, there is no concise ready-analytic approach to define the problem, other than complete supervised development of the definitions of an algorithm for doing the task. Engineering2 methods can do that on a basic level, taking basic rules and searching the space of combinations for fruitful relationships in data.

Most powerful is when engineering1 and engineering2 are combined. Engineering2 does the brute-force searches for fitness, and engineering1 does the collection of useful abstract rules of thumb for making another level of engineering2 to explore. When engineering1 and engineering2 are fused, one has natural parallel or serialized exploration of combinations, and engineering1 serves as a meta-method of fast analytical utility analysis in place of much slower natural processes. This forms a synergy between the two methods; one can't live without the other. Engineering2 takes millions of parallel operation-years naturally; engineering1 doing engineering2 with combinatorial chemistry in the pharmaceutical industry finds new molecules that natural evolution would take millions of years to find, but it is a brute-force engineering2 method encapsulated in that hazy engineering1 goal search in combinatorics.

QUESTION: Looney, what citations can you give that say proteins cannot polymerize using natural catalysts in the early ocean, in place of modern biological molecular machines? They only need water-bond catalysis, and you say it is "proven" that there are no natural catalytic molecules, minerals, metal ions, pH variances, temperature variances, etc. in the natural parallel combinatorial chemistry? I'd like to see the proof against natural catalysis, or even automatic polymerization of amino acids with their water bonds primed for polymerization.

Engineering2 is a young science, like a young child, only 30 years old with heavy computer usage. To say it will always remain a niche is like saying young children will always remain subservient to their elders. And maybe that is true, which is why generations must pass for their children to reach their true potential?

Even a God involved with the details of designing life, in His Mind, must think how to combine things. His mind makes an exhaustive combinatorial search of every possibility, and the ones that stand out become part of the prosperous methods. If God did not do a parallel brute-force search, how does God's Mind know something? It is a process of mind to simulate these little atoms in this little universe that He Made. So even God has to use some engineering2, or else He doesn't know the answers to begin with in an exhaustive combinatorial search. Or does God have a God that gives Him the answer from a higher plane than God Himself?

LoneRubberDragon said...

To Looney, your argument on protein polymerization requiring complex molecular machinery is a potential straw man, in that it should read: modern protein synthesis requires *modern* complex molecular machinery for polymerizing. And the argument implies that there are 0 in, say, 10,000 simple molecules in the early earth that would allow natural simple polymerization at a basic level, to scaffold hierarchically toward *modern* life potential. If you knock out the rung, saying implicitly that amino acids are not LEGO-like at all in a natural early earth solution, then you are correct: life would never occur, and God would have to snap the LEGOs together by hand to jump-start life, because amino acids have a natural chemical barrier preventing simple compound-assisted catalysis into polymers, lacking the *modern* molecular machinery we see in *modern* life.

LoneRubberDragon said...

To Delirious, perhaps I can capture Looney's argument in its essence, fused with my terminology:

Looney + LRD theory: "Abiogenesis is impossible, and requires a creator, because natural combinatorial chemistry, in an energy-open-system early earth, will stall out at some point, finding among the combinatorial product of thousands of input molecules no new forward reactions in all those molecular combinations, and thus this prevents the exponential saturation of feedback combinatorial chemistry that can lead to the genesis of the first primitive non-modern life using natural means. There are no catalysts for further reaction, no energization by sunlight pathways, no more polymer combinations possible: a complete dead-end energy barrier, among the maximal molecule combinatorial potential of the early earth chemistry, on every individual molecule branch."

That is quite a complicated exhaustive proof, based on The Bible and the ocean/lake chemistry of early earth. I've never read papers saying that natural ocean chemistry has a combinatorial bottleneck molecule-formation energy barrier, even through catalysis and photon energizing of molecules. The Miller-Urey experiment alone produced a tarry substance whose molecular composition was beyond analytic analysis. Early earth was running thousands of Miller-Urey type experiments in parallel: in lightning, sunlight, thermal vents, on mineral sand beaches, in dark parts of the deeper ocean, in infalling comets with chemicals and carbonaceous meteors with carbon compounds. I fail to see how nature reaches an absolute limit in compound formation and polymerization through catalysis, in the great variety of chemicals available in the primordial soup.

And my name LoneRubberDragon comes from interest in the character of medieval dragons, not the old dragon, that old serpent, of The Bible. Though it does serve as a natural lightning rod for controversy!

LoneRubberDragon said...

The argument is like saying: take the first 92 elements of basic mathematics and how they can be bonded for proofs, and the mathematical proofs will be finite, and not infinite in derivation from the root of 92. A little like saying Godel-Turing can "prove" that an arbitrary set of atomic rules that can be combined in some ways will *always stop* generating new rule combinations.

LoneRubberDragon said...

http://wiki.cotch.net/index.php/Amino_acids_would_not_polymerize

argues that there are channels in the wide ocean that allow polymerization, or catalytic polymerization. RNA, which polymerizes more easily than amino acids, can be the facilitator of protein polymerization through catalytic reactions, too.

Amino acids *in pure water* don't polymerize, granted, but the early ocean had numerous minerals and other molecules that could lower the energy barrier by catalysis, to help make the water bonds.

LoneRubberDragon said...

And as for the fad aspect of complexity, things may come and go, but it never goes away. Neural networks are still around, Bayesian analysis is still around, game theory is still around, Godel's incompleteness theory is still around, Church-Turing theory is still around, complexity and chaos theory is still around, cybernetics and system theory is still around, and so is classical intelligent design engineering1 theory, still. And if you want to attack big problems like designing life, or complex systems, you can't live without engineering2, as much as you can't live without engineering1.

One could say, among us dirty-rags humans, that following every word of Truth from God is a fad. Moses goes up to the mountain to speak to YHVH, and when he returns, they have a Golden Calf. The tribes of Israel so often are removed from following God in favor of men's traditions, to the point God threatens divorce of Israel. Throughout the Bible, God and His True People exhort us all to come back to God, because we lose Him like a fad. But to call God's Real Truth a fad, and a niche, because we cannot follow His True Way, misses God's True Power.

The same can be said for the young engineering2 methodology, still in its infancy, like us "little children" in God's eyes.

LoneRubberDragon said...

And I have been reading some of your other posts on technical aspects making engineering2 tools "useless". I can agree that most software of that nature can be maladapted to expressing general engineering2 algorithms. Changing a language can even change, or prevent, what you can possibly express, because languages have different modes of expressing the same thought, possibly putting the natural discovery of a new compound rule beyond reasonable time in *their* syntax, like hoping for a jet from a tornado of old parts that don't magnetic-code bond to each other.

Another problem is problem dimensionality, or size, when put on a serial computer. In nature, one often has great parallelism, like quintillions of chemical experiments exploring millions of dimensions in parallel, as in the early ocean. Asking even the fastest single- or dual-core serial processor to simulate an aspect of human parallel neural thought may make a problem *unacceptable*, even *useless*, in the time required for daily engineering.

But when you have the right formulation language, that quintessentially captures a very basic set of rules, and combines them, and searches hierarchical combinations, and uses parallelism, one can solve problems no analytical method can touch, like human thought. Even if we are not there yet wholly, in software and serial computers, it doesn't mean that we *can't* be there today, or in the future.

It's like asking in 1800 what use electricity theory is to daily engineering life; it is foolish to pass judgement so early on such young tools and ideas, when computers are just coming of age.

LoneRubberDragon said...

For example of application, if you took that "AI" software package you were trying to solve a problem with, a package hopefully using combinatorial hierarchical brute-force inference rules, and put the same code on 10,000 computers, where each one analyzed one partition of the problem, and every 10 minutes they shared the best 10 answers among themselves, I bet you would have found that answer in a reasonable time, and that there can be some temporal heuristic synergy among the parallel partitioned sharing processors, each one searching its partition deeper and sharing its best findings in parallel.
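What LRD describes is essentially an island-model search. Below is a minimal single-process simulation of the idea; the objective, population sizes, and migration interval are illustrative stand-ins (not the actual package under discussion), and "epochs" play the role of the 10-minute sharing rounds:

```python
import random

random.seed(1)

GENOME_LEN = 64

def fitness(g):          # toy objective: maximize the number of 1 bits
    return sum(g)

def mutate(g):
    g = list(g)
    i = random.randrange(len(g))
    g[i] ^= 1            # flip one bit
    return g

def run_islands(n_islands=8, pop=20, epochs=10, steps=50, share=3):
    # each "computer" gets its own independent population (partition)
    islands = [[[random.randint(0, 1) for _ in range(GENOME_LEN)]
                for _ in range(pop)] for _ in range(n_islands)]
    for _ in range(epochs):
        for isl in islands:                      # local search per island
            for _ in range(steps):
                parent = max(random.sample(isl, 3), key=fitness)
                child = mutate(parent)
                worst = min(range(pop), key=lambda i: fitness(isl[i]))
                if fitness(child) > fitness(isl[worst]):
                    isl[worst] = child
        # migration: every epoch, each island receives its neighbor's
        # best `share` answers (the "share the best answers" step)
        bests = [sorted(isl, key=fitness, reverse=True)[:share]
                 for isl in islands]
        for i, isl in enumerate(islands):
            isl[-share:] = [list(m) for m in bests[(i + 1) % n_islands]]
    return max((g for isl in islands for g in isl), key=fitness)

best = run_islands()
```

The migration step is what creates the synergy: a good partial answer found in one partition seeds the others, so no island has to rediscover it from scratch.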

To look at a single processor application of your problem, and call it useless, is short sighted to its proper application.

LoneRubberDragon said...

Additionally, in your AI application test to design something, why were you fiddling with parameters when, if you had the capability, you could have embedded those atoms of the rules in the program to fiddle their own dials, and let the thing optimize itself? Then you could have abstracted yourself out of the twiddling problem, and had the computer doing the whole thing. You needed to jump out of the system one more level, to let the system handle your twiddling problems.
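"Letting the thing fiddle its own dials" has a standard form: self-adaptation, where each candidate carries its own tuning parameter and the search adjusts the dial along with the solution. A minimal (1+1)-style sketch follows; the quadratic objective is just a stand-in for whatever was actually being designed:

```python
import math
import random

random.seed(2)

def self_adaptive_minimize(f, x0, sigma0=1.0, iters=300):
    """Hill-climb on f while the step size sigma tunes itself: the
    dial is mutated first, then used to mutate the solution, and
    both survive together only if the move was an improvement."""
    x, sigma = x0, sigma0
    for _ in range(iters):
        s2 = sigma * math.exp(random.gauss(0.0, 0.2))  # perturb the dial
        x2 = x + random.gauss(0.0, s2)                 # use the new dial
        if f(x2) <= f(x):
            x, sigma = x2, s2                          # keep both
    return x

best = self_adaptive_minimize(lambda x: (x - 3.0) ** 2, x0=-10.0)
```

The design choice is that the step size is rewarded only through the quality of the moves it produces, so large steps dominate while far from the optimum and small steps take over near it, with no human twiddling in the loop.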

LoneRubberDragon said...

Errata: "water bonds", regarding amino acid polymerization, should have been "peptide bonds".

LoneRubberDragon said...

L:Delirious, don't feel bad. Evolution is the only meta-narrative (err, theory) that is always defended based on the most recent ideas and complex explanations.

LRD: Life is complex; do you really expect a complex system to describe itself in a short span? Look at photosynthesis charts and metabolism charts; they are not simple either. Some things take time to convey. Engineering1 is complex too, and takes most people 4 years at college to get a degree. If these were easy explanations, it could be done in an afternoon, and that is for intelligent design practices.

L:In the end, the 'theory of evolution', which has been treated as scientific fact since the 1870's, is always proven based on things that are less than 10 years old! LRD is using the latest forms that are popular. Another 10 years later, it will be something else and the current arguments will be forgotten. The problem with putting an end to the theory of evolution is that it always evolves into something a little different and we have to fight it all over again.

LRD: Combinatorial chemistry abiogenesis is decades old, dating back to Oparin in AD1924, but still in infancy, as knowledge of DNA itself is only decades old. Evolutionary algorithms are as old as Darwin's concepts. Brute-force blind searches are as old as mathematics. Not looking backward in history shows a lack of historical context, and not looking at current research to keep up with studies is unacceptable. Both are required to keep a well-founded knowledge of truth, potentialities, and theories as time progresses and data accumulates. And, yes, topics like neural networks and expert systems were hyped in the 80's and 90's, but that doesn't make them invalid methods today for the right applications, or mean that nature or man cannot follow the ideas, naturally or in implementations, to extend both our engineering2 (evolutionary) and engineering1 (analytical intelligent design) capabilities.

Looney said...

LRD, you should slow down a bit! We can only deal with one thing at a time! Which point would you like to discuss?

LoneRubberDragon said...

*grins* Sorry about the burst transmissions of data. Please, feel free to ask me questions that are pointed in disagreement to specific topics, or data that hard countermands some of my points without overly anecdotal evidence.

To me this is not much, but I am used to firehoses of information when doing industry engineering work.

To add some color, here is background reading material to get everyone up to speed on the core and periphery of these related subjects, with a little historical and current topic information:

http://en.wikipedia.org/wiki/Combinatorial_chemistry
http://en.wikipedia.org/wiki/Hypercycle_%28chemistry%29
http://en.wikipedia.org/wiki/Self_organization
http://en.wikipedia.org/wiki/Computational_chemistry
http://en.wikipedia.org/wiki/Abiogenesis
http://en.wikipedia.org/wiki/Origin_of_life
http://en.wikipedia.org/wiki/Aleksandr_Ivanovich_Oparin
http://en.wikipedia.org/wiki/Miller_urey
http://en.wikipedia.org/wiki/Neural_Networks
http://en.wikipedia.org/wiki/Expert_systems
http://en.wikipedia.org/wiki/Genetic_algorithms
http://en.wikipedia.org/wiki/Brute-force_search
http://en.wikipedia.org/wiki/Parallel_computers
http://en.wikipedia.org/wiki/Evolutionary_algorithm
http://en.wikipedia.org/wiki/Chaos_theory
http://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems
http://en.wikipedia.org/wiki/Halting_problem
http://en.wikipedia.org/wiki/Turing-complete
http://en.wikipedia.org/wiki/Complex_systems

Looney said...

LRD, since you said you were familiar with the circuit layout application of evolutionary algorithms, I would challenge you to do something:

Itemize all the ways that ID impacts the evolutionary method for circuit layout.

The next thing I would like to know is what aspects of circuit design aren't done by evolutionary methods, but I suspect that would be too difficult for anyone to do.

In my industry, we mostly use equation solvers to rapidly solve problems of up to millions of equations on parallel computers. We could never do this by hand, nor could we do it through evolutionary methods, as that would take decades on parallel boxes. Still, this comes nowhere close to something that we would equate with design. It is just a glorified calculator. To me, the use of GA to solve layout problems is just the same. We have a calculator that handles something tedious, but the serious design work is all engineering1. What do you think?
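The equation-solver style of calculation described above can be miniaturized to a textbook Krylov method. This is only a toy dense sketch of conjugate gradients on a 2x2 symmetric positive-definite system (real solvers are sparse, preconditioned, and parallel), the "glorified calculator" in miniature:

```python
def conjugate_gradient(A, b, iters=50):
    """Solve A x = b for symmetric positive-definite A (dense, toy scale)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A x, with x = 0 initially
    p = r[:]
    rs = sum(v * v for v in r)
    for _ in range(iters):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(v * v for v in r)
        if rs_new < 1e-20:        # residual small enough: converged
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]      # invented toy system
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
```

In exact arithmetic CG finishes in at most n iterations; at production scale the same loop, with sparse matrix-vector products distributed across processors, is what solves millions of equations.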

Also, I am curious how long you have been in software. My first programming course at the university was in 1976, so I have seen a lot of change over the decades. What I have seen in the last few years is a reaching of a plateau where progress is slowing rapidly while costs are skyrocketing. The same happened in the aircraft industry, but shifted by 40 years. Thus, I tend to be very skeptical about future breakthroughs that revolutionize things.

LoneRubberDragon said...

Intelligent design impacts the fields of:
(1) N-port device characteristics, which are analytically implemented, e.g. transistors, diodes, pentodes (tube), etc.
(2) Line characteristics to model unit interconnects, e.g. transmission trace effects like delay, capacitance, inductance, wave phase distortion, and signal reflections from end modules.
(3) Complex analysis to capture frequency Bode responses.
(4) Fourier and Laplacian analysis for steady-state and transient analysis.

Evolutionary design (can) impact the fields of:
(1) Arrangements of randomly placed power lines and circuit elements.
(2) Evaluation of random designs for "signal" characteristics, e.g. amplification, low input impedance, current mode, voltage mode, metastability (for digital circuits), power consumption efficiency.
(3) Library storage of sub-units that have good characteristics, to use as "chunks" of circuit in their own right.
(4) Optimization of biasing and frequency elements for further signal fitness, e.g. transistor circuit biasing for maximal throughput of input-to-output bandwidth, automated filter design, pulse wave reshaping to criteria.
(5) Optimization of bulk circuits, e.g. variation searches for the fewest number of parts, layout optimization to minimize trace lengths and fanouts, power optimization of the global circuit given sub-optimal sub-unit and atomic module use, variations of elements used, e.g. area of transistors, capacitors, traces, etc.
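Item (5), layout optimization to minimize trace lengths, can be sketched as a tiny evolutionary search: random swap variations on module placements, keeping non-worsening layouts. Everything here (netlist, grid, parameters) is an invented toy, not a real placement tool:

```python
import random

# toy netlist: pairs of module indices that must be wired together
NETS = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

def wire_length(place):
    """Total Manhattan trace length for a placement {module: (x, y)}."""
    return sum(abs(place[a][0] - place[b][0]) + abs(place[a][1] - place[b][1])
               for a, b in NETS)

def optimize_layout(slots, modules=4, steps=2000, seed=0):
    rng = random.Random(seed)
    order = list(range(len(slots)))        # order[m] = slot assigned to module m
    rng.shuffle(order)
    placed = lambda o: {m: slots[o[m]] for m in range(modules)}
    cost = wire_length(placed(order))
    for _ in range(steps):
        cand = list(order)
        i, j = rng.sample(range(len(slots)), 2)  # swap two slots (maybe empty)
        cand[i], cand[j] = cand[j], cand[i]
        c = wire_length(placed(cand))
        if c <= cost:                       # keep non-worsening variations
            order, cost = cand, c
    return placed(order)

slots = [(x, y) for x in range(3) for y in range(3)]   # 3x3 grid of sites
layout = optimize_layout(slots)
```

The fitness function here is pure trace length; a realistic one would also score fanout, congestion, and timing, each of which is an engineering1 choice.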

I agree: if intelligent design (analytics) has found large-scale analytical methods that already encapsulate the core of what an evolutionary program can do, but in less time, then it is *always* good to use the analytical methods when they are available. But the unknown-answer zones can use brute-force searches and evolutionary optimization methods, say for the most optimized of circuits, which can benefit from evolutionary variance methods in sticky spots, because the analytical methods have limits, humans have finite speed, and computers have the ability to do such things. Old-style radios and early 1970's electronics have amazing circuit layouts very different from modern black-box unit approaches, and are often more optimized for their application than chained and hierarchical design methods. Of course they were intelligently designed by great engineers back in the day, but they are often shrouded in proprietary protection. Evolutionary methods allow a reverse-engineering aspect of these great old methods, restoring them to their public place by natural algorithmic discovery. Some problems are seen in modern analytical compilers that produce "bloatware", because the compiler is not sophisticated enough to do anything but analytical optimization techniques within restricted domains. These may be amenable to human-like levels of optimization with properly described evolutionary algorithms in, say, military applications, where high optimization and robustness of design cannot be done large-scale-analytically, because analytical blocks blur into each other in highly optimized designs. Not that compartmentalized design doesn't have its place, with its library of already-solved analytical methods.

At this point, since time is finite for most projects, we are stuck with only the analytical tools, and trust, that we are not creating bloatware in software, or circuits that are not completely optimal in the time frame allowed. A smart computer of the future would make exhaustive searches of things no one engineer could ever hope to encompass in a short schedule.

Evolutionary methods can find quite novel approaches to problems that are unanalytical, intelligent-design-wise, in nature. Like neural nets that produce a table of information that is incomprehensible, but can read handwritten characters for the post office mail system. Imagine making a VLSI system for that analytically. But once evolution is done, it instantly turns into analytical methods, in that the evolved black box can be used as a module, or the evolutionary method yields a deep analytical insight that makes all future problems of that type/class rapid, when the method produces a comprehensible idea that would have been incomprehensible if you had to derive it from existing analytical methods.

Edison used brute force evolutionary methods at the turn of the last century to become quite a prolific product finder, towards a selection goal in his mind.

Well, in AD1978 I got a Sinclair Zilog, then a Commodore VIC in AD1980, a Commodore 64 in AD1984 (started doing numerical differential methods before I knew what they were), a Commodore 128 in AD1987 (started doing machine language and planetary orbit simulators and such), then a PC XT in AD1989, doing Fourier transforms. Then I went to college for a digital electrical engineering degree, worked in industry on radar systems and systems code, then entered image processing and image processing systems, 1995-present. I have been following cybernetics, systems math, artificial intelligence, neural networks, differential equations, etc. for the last 20 years, but only have 10 years of industry work under my belt, economic-officially.

LoneRubberDragon said...

Regarding engineering, I blame the industry: too many cooks spoiling the soup, outsourcing to non-experts, IP compartmentalizations, reduced defense spending after the Cold War ended, and the ever-shortening time-to-market competition, causing new engineers to trip over themselves and the old engineers to leave the industry. That's why computers have to take over everything, to get to market in as fast a time with as perfect a product as computer-ly possible, removing the human element with its finite latency and finite education.

LoneRubberDragon said...

I can add that current engineers do not have enough intelligent design methodology, in breadth and depth, to understand the inter-system interactions produced in complex systems and make them an organized whole.

And the tools have their limitations too, like modern compilers are good with color syntax and database reference links for variables and functions to speed up some tasks in covering a system software, but I can imagine better systems that give even higher levels of overview of software. For example, implementing multiple state machines in software at hundreds of operation points in the linear code, could use a tool to allow the programmer to surf the state machine, to find action nodes, move blocks of action-transition code, etc. that are left to human hand even today.

And operating systems are nice for their coverage, but trying to surf the entire set of O/S code can also use top level tools to allow easy browsing of the entire machine O/S. Also, O/S like visual C compilers today, seem to bloat the assembly for even a short iterative register for loop with unnecessary instructions related to the O/S that make computers slower than a DSP compiler code of the same for loop.

It's really quite disconcerting, especially looking at a system like Vista. I have been collecting XP laptops and Borland Builder compilers to avoid Vista until they can iron out that system, more proprietary than XP ever was. I tried running 3.1 MFC code on even an XP Visual Studio C compiler, and it didn't even work without massive forward-compatibility revamping. Whatever happened to preserving backward compatibility for the thousands of lines of historical code???

I dunno, it can be exasperating, in software today when it takes 2 months to master graphics using *their syntax* that I can do in QBASIC in ten minutes. Something evil is afoot in industry these days, related mostly to money and time constraints.

LoneRubberDragon said...

If one of my old bosses is right, it seems the industry is going more toward privatizing and commoditizing things, like in the days of IBM and DEC, for softwares, firmwares, hardwares, and VHDL IP, which also drives up some types of costs. Moore's law is hitting two-dimensional limitations at the atomic scale, and if we don't reacquire Moore's law in three-dimensional circuits, things may become dark days for analog/digital engineering.

LoneRubberDragon said...

Just the mere fact that an interpreted language Java runs as fast as compiled C either shows that interpreted languages have suddenly become "science", or simply that C compilers have become bloatware optimizers.

LoneRubberDragon said...

ANSI C, 15 years ago, used to be a language modeled after assembly efficiency, but now they've spoiled the compilers on desktops, such that by the late AD1990's the know-nothings who wanted all of the money said, like chickens with their heads cut off, let's use JAVA on everything!, when C is a perfectly good language when properly dividing the word. Now *that's* overhyped technology, that comes and goes like a fad, and is only good for web inter-platform interface applications, its intended application.

In that aspect, I agree, that true *intelligent design* engineering1, is sorely lacking these days.

Looney said...

LRD, it seems your mind is overflowing with ideas!

I should back up on something for a moment. To me, the word "evolution" and the word "change" are exact synonyms. If something changed, then it evolved. If it evolved, then it changed. Some intellectuals have referred to evolution as a "framework" rather than a theory. This is true, in that "change" is the superset of all theories. If you look at a graduate level molecular biology text, they will freely switch from one physical law to another and when they are done, they credit "evolution" with the success.

My dictionary from 1950 gives an example for the usage of the word evolution: "the evolution of the steam ship". In this case, evolution includes intelligent design as a subset, which is how we use it every day. This situation creates a lot of confusion. I could, for example, claim that God evolved the universe and life in 6 24-hour days and I haven't strayed outside of the definition of the word "evolution". In fact, I wouldn't be surprised if He combined various elements of designs to make new ones, giving us the observable property that similar genes in differing animals would need to diverge at different rates.

Now if I pick an example from Dawkins, he has a chapter in one of his books that describes radar, then sonar, then bat echo location and claims that this somehow supports evolution. What I saw was: radar (ID), sonar (ID), echo location (why not ID too - per simple induction?). Somehow he didn't get this. You can't use ID examples to prove that non-ID design is feasible.

The problem as we go flittering around between various technologies and the words evolution and ID is that we need to keep our focus on what involves ID and what we claim doesn't involve ID.

My assertion is that in engineering, there is no real evidence of evolutionary methods replacing ID, for the simple reason that current evolutionary methods are ID-based optimization algorithms which maintain a single-step history (genome population), compute new guesses from old (recombination), and sometimes use a random number generator (mutation). Usually the evolutionary method argument ends up inadvertently exploiting the success of ID to claim credit for evolution. Next, we do a subtle definition shift by claiming that this validates non-ID based evolution. This is a non sequitur.
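That characterization maps line for line onto textbook GA code; note that every choice below (encoding, selection, crossover point, mutation rate) is an engineering1 decision. A toy sketch with invented parameters:

```python
import random

def run_ga(fitness, length=20, pop_size=30, gens=40, seed=0):
    """Minimal GA: the three ingredients named above, and nothing else."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]       # single-step history: the population
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)  # recombination: one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (rng.random() < 0.02) for g in child]  # mutation: the RNG
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = run_ga(sum)   # toy fitness (OneMax): maximize the number of 1-bits
```

The fitness function, which does all the "judging," is supplied from outside the loop, which is the point being argued.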

Comparing evolutionary methods for optimization theory, they are 0-th order searches and are useful only for certain classes of problems. Experienced engineers usually seek 1st or 2nd order methods (sometimes higher) when problems are well behaved for better convergence. (derived from Taylor series expansion in n-dimensions.) Thus, evolutionary methods are just one little branch of the tree of optimization theory. Optimization theory, however, is just one little branch of the mathematical tools needed by engineering. Finally, mathematical tools are just one of the branches of skills needed by a product design department.
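The order distinction can be seen numerically: on a well-behaved quadratic, a 1st-order method using the derivative converges geometrically, while a 0-th order search only samples function values. A toy sketch (function, step size, and budget all invented):

```python
import random

def f(x):            # well-behaved quadratic, minimum at x = 3
    return (x - 3.0) ** 2

def df(x):           # 1st-order information: the derivative
    return 2.0 * (x - 3.0)

# 1st-order method: gradient descent, 25 steps
x = 0.0
for _ in range(25):
    x -= 0.3 * df(x)          # step size chosen for stable convergence

# 0-th order method: random search, same budget, no derivative used
rng = random.Random(0)
y, best = 0.0, f(0.0)
for _ in range(25):
    cand = y + rng.uniform(-1, 1)
    if f(cand) < best:        # keep only improving samples
        y, best = cand, f(cand)
```

With the same budget of 25 evaluations, the gradient method lands within a tiny tolerance of the minimum; the 0-th order search typically does not, which is why engineers reach for higher-order methods whenever the problem is smooth enough to allow them.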

Evolutionists, on the other hand, need to show why the little twig of evolutionary methods - which aren't known to exist absent ID - in the forest of ID skills needed for product design is really something that is capable of replacing ID in its entirety and has no need whatsoever of ID.

Isn't it just easier to say that technology always requires a technologist and just stop?

LoneRubberDragon said...

I promise you, I will get back to your newest post for digestion, analysis and synthesis, but for this moment...

I'll tell you a story of my last project. The customer wanted to be able to download new firmware functions that could be called from the fixed ROM code in the finished IC of a camera. So I made a function wrapper containing relocatable function pointer addresses for every function in the code, and a #define syntax that makes coding easy, allowing a totally differently-parametered downloaded firmware C function to nest itself peaceably into the fixed ROM code without a whimper.

But I had this English engineer who was doing code testing, and we needed him to work on some code modules to free up my time for other projects. He read my white paper on the code function system syntax and flatly said, this is impossible, it won't work! He said that again and again for a week, trying to code. I had to take him to the side and say, "You see that camera taking pictures right now?", "Yes.", "You know how you've been running camera test code all last month?", "Yes.", "Well, the *whole software* runs using that syntax; get to learning it, so you can help write some new code modules for the camera, please!". And I admit, I wished *I* had a higher-level tool to assist with the C #define coding syntax, and #define variable definitions for a flat memory system to optimize "optimized code" instructions, but when everyone learns and knows the convention, everything flows smoothly. Intelligent design is a rarity these days, as much as evolutionary design.
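The relocatable-function-pointer wrapper described above is, in essence, an indirection table that downloaded code may repopulate. Here is a loose analogy in Python rather than the original C #define machinery (every name is invented; this is not the actual camera code):

```python
# ROM code calls functions only through this table of "relocatable pointers",
# so a downloaded replacement can nest itself in without touching ROM.
FUNC_TABLE = {}

def register(name):
    """Stand-in for the #define wrapper: binds a name to a table slot."""
    def wrap(fn):
        FUNC_TABLE[name] = fn
        return fn
    return wrap

@register("white_balance")
def rom_white_balance(frame):
    return [min(255, p + 1) for p in frame]      # fixed ROM version

def call(name, *args):
    return FUNC_TABLE[name](*args)               # every call is indirect

# later: "download" new firmware that overrides the ROM slot
@register("white_balance")
def patched_white_balance(frame):
    return [min(255, p + 8) for p in frame]

out = call("white_balance", [10, 250])
```

Because ROM code only ever dispatches through the table, the patched function is picked up everywhere at once, which is the same property the C wrapper provided.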

That's why I am becoming a proponent of removing humans from the loop and imbuing computers with evolved and analytical intelligence: models taken from the science of science and the art of evolutionary theories, both. For before one becomes analytical, one is evolutionary brute force. And once one has solved something by evolutionary brute force, one becomes, on that thing, analytical.

LoneRubberDragon said...

LOO:LRD, it seems your mind is overflowing with ideas!

LOO:I should back up on something for a moment. To me, the word "evolution" and the word "change" are exact synonyms. If something changed, then it evolved. If it evolved, then it changed. Some intellectuals have referred to evolution as a "framework" rather than a theory. This is true, in that "change" is the superset of all theories. If you look at a graduate level molecular biology text, they will freely switch from one physical law to another and when they are done, they credit "evolution" with the success.

LRD:Now I am not trying to be equivocating here; reconciling is a more proper word. Evolution is the changing of something with a template, like lifeforms passing from generation to generation, or a system design being explored in maximally numerous and efficient changes. The framework amendment to the word evolution-change comes from the fact that there is a system required for the evolution to occur in, as it cannot occur in a vacuum, per se. Life needs DNA and a space for natural selection fitness. A design needs atoms and hierarchies to represent populations in a linear computer memory, and code to emulate variation and selection criteria in that abstract space, modeled after natural evolution theory concepts. In abiogenesis, one needs atoms with electron shells, and a large amount of ocean to act as the parallel computer, with selection in the form of natural forward reactions that are prosperous over lesser reactions. Now it may be true that chemical evolution *does* stall out in a combinatorial chemistry exploration at some point, reaching a maximal complexity barrier, but to tell the truth I find that hard to see, only because it is so complex a problem, exploring *all natural chemical reaction problems in all ocean environments* examined *simultaneously* and in *parallel*, that I can't even believe biochemists with Nobel Prizes know the real answer. It is as hard as proving a negative.

LRD:Now I agree academicians may like to tout their theory more than they *ought to*, but it's their paradigm, and I can respect that. They predict that bacteria will gain solutions to never-before-existing antibiotics in a never-ending chain of adaptations, or else; but, hopefully, if they are wrong, then we *can* cure all diseases with the right intelligent design DNA codes. I am potentially guilty here too, of touting more than I ought! If only to convey the ideas, so that it isn't only the Atheists getting the high-paying jobs making computers intelligent, and Christians are relegated to computer users with their heads in the sand, in *this world*. But as an analogy: good Muslims believe they know God's truth, with conviction and faith. Good Jewish people believe they know God's truth, with conviction and faith. Good Jehovah's Witnesses believe they know God's truth, with conviction and faith. Good Mormons believe they know God's truth, with conviction and faith. Good Buddhists, good Hindus, good Animists, good Evolutionists, good Quantum Physicists, good String Theorists, all believe they know some ultimate truth, with conviction and faith. To me, it all sometimes looks like the song that Buffalo Springfield used to sing in the late AD1960's:

LRD:BUF:There's battle lines being drawn.
Nobody's right if everybody's wrong.
Young people speaking their minds,
Getting so much resistance from behind.
...
LRD:BUF:A thousand people in the street,
Singing songs and carrying signs,
Mostly saying, “***Hooray for our side.***”
It's time we stop, hey, what's that sound?
Everybody look what's going down.

LRD:I can only say, they all have aspects of God's truth to their words, or else they would not be *moved* by them, a spiritual unction, if I ever saw one. They all have ideas that cannot be utterly ignored, but read properly dividing the word, even divining the word. And of all people, Christians are people of every word of Truth that comes from God, even if Nebuchadrezzar is God's-tool speaking those words!

LOO:My dictionary from 1950 gives an example for the usage of the word evolution: "the evolution of the steam ship". In this case, evolution includes intelligent design as a subset, which is how we use it every day. This situation creates a lot of confusion. I could, for example, claim that God evolved the universe and life in 6 24-hour days and I haven't strayed outside of the definition of the word "evolution". In fact, I wouldn't be surprised if He combined various elements of designs to make new ones, giving us the observable property that similar genes in differing animals would need to diverge at different rates.

LOO:Now if I pick an example from Dawkins, he has a chapter in one of his books that describes radar, then sonar, then bat echo location and claims that this somehow supports evolution. What I saw was: radar (ID), sonar (ID), echo location (why not ID too - per simple induction?). Somehow he didn't get this. You can't use ID examples to prove that non-ID design is feasible.

LOO:The problem as we go flittering around between various technologies and the words evolution and ID is that we need to keep our focus on what involves ID and what we claim doesn't involve ID.

LOO:My assertion is that in engineering, there is no real evidence of evolutionary methods replacing ID, for the simple reason that current evolutionary methods are ID-based optimization algorithms which maintain a single-step history (genome population), compute new guesses from old (recombination), and sometimes use a random number generator (mutation). Usually the evolutionary method argument ends up inadvertently exploiting the success of ID to claim credit for evolution. Next, we do a subtle definition shift by claiming that this validates non-ID based evolution. This is a non sequitur.

LRD:We happily agree, saying the same thing here in different ways. First there is evolution, like Boole in the AD1800's coming up with Boolean logic by considering the nature of decisions and their logics in combinations, discovering the patterns and rules, until he hit upon "the methodology". From that point on it became the domain of ID. Evolution on a thing, once done, becomes forever an aspect of ID. But even then, it has evolved. De Morgan discovered more logic rules, quite evident in brute-force combination tables, yielding analytical operators, AND, OR, NOT, and their inversion rules, and SUMS and PRODUCTS forms of arbitrary expressions. Jacquard explored mechanical logic machines in his cloth looms, and brute-force explorations of applications yielded US census tabulating machines in the 1880's from Hollerith, whose company later became part of IBM. Babbage, exploring combinations of analytic gears in refined brute force, came up with polynomial gear computers. Both in combination yielded adding machines. These then used tubes, and so forth. Evolution yielding ID yielding evolution yielding ID, in an Eternal Golden Braid. ID is always growing, but at its fringes evolution is always with it, for if it weren't (in man's hands) we would have known everything at the beginning of time, and invented everything old under the sun in the first few centuries of man. So even analytic ideas are evolving, to acquire the growing code of ID. ID never gets smaller (unless man mucks everything up). Evolution never disappears, but always transforms to make itself useful; it always passes away as ID grows and grows, yet ID is never without evolution to begin with. Language doesn't spring forth in an instant, but evolves and generates a growing ID. Computers don't spring forth in an instant, but evolve and generate a growing ID. Children don't spring into adults overnight, but evolve, and generate an internal growing ID within themselves. Only backsliding creates a shrinking ID.
But evolution and ID live hand in hand, and one does not exist without the other, and ID is always superior to evolution once evolution has hit upon "the methodology". Which is a great failing I see in evolution theory or biology theory: they are too fixated on evolution alone to tout, in massive quantities, the ID that I believe is actually embedded in how DNA operates in modern biology. Michael Behe, in "Darwin's Black Box", pg. 41, mentions how a single point mutation can have a systems-level effect on an organism, replacing antennae with legs. It isn't lethal, and not very helpful to fitness, but it shows ID captured in evolutionary systems knowledge in DNA codes. They hardly ever talk about these things in these ways in ID or evolutionary theory. Yes, it could be designed, but yes, it could be some pure analytic ID regarding systems naturally captured in early multicellular life. It's hard to say, when God's not around to ask, how this arose over the 200,000,000*10[generations/year]*1,000,000[members of specie] of fruit flies over the ages. Is it natural or is it created? I can't quite tell. Of course it could be natural, and we intelligent humans can take a hint and try hierarchical methods of chunking design exploration to heuristically speed up evolutionary algorithms to design a better antenna. (Pun and no pun.)

LOO:Comparing evolutionary methods for optimization theory, they are 0-th order searches and are useful only for certain classes of problems. Experienced engineers usually seek 1st or 2nd order methods (sometimes higher) when problems are well behaved for better convergence. (derived from Taylor series expansion in n-dimensions.) Thus, evolutionary methods are just one little branch of the tree of optimization theory. Optimization theory, however, is just one little branch of the mathematical tools needed by engineering. Finally, mathematical tools are just one of the branches of skills needed by a product design department.

LRD:I think this is agreed too. Evolution is around, but not needed everywhere when a large mass of ID has accumulated (and can be usefully accessed and unprivatized). But the ID cannot live without evolution at the fringes of unknowns, before they can become knowns and thus add to the ID mass.


LOO:Evolutionists, on the other hand, need to show why the little twig of evolutionary methods - which aren't known to exist absent ID - in the forest of ID skills needed for product design is really something that is capable of replacing ID in its entirety and has no need whatsoever of ID.

LRD:"which aren't known to exist absent ID": I would have removed only this, as an unproven claim, given the combinatorial depth of exploring a negative in the first place. Otherwise, we are agreed that evolution *cannot* replace ID. It is ID that continually *replaces* evolution, and it is *evolution* that must continually adapt to where ID cannot yet reach. And again, one can't "live" without the other. Only the infinite God who has done every evolutionary exploration, and converted everything into intelligent design, can be in a state where evolution is dead and done away with. And even then, before time began, evolution is required, and then ID takes over everything, as always, ever growing without any losses.

LOO:Isn't it just easier to say that technology always requires a technologist and just stop?

LRD:At this point in time in the universe, on this planet, we can say it requires a technologist, whether a human, or a bacterium, or a computer with a basic universe of rules and a means to explore the fringes of ID with evolutionary exploration, all who have a level of accumulated natural analytics from natural evolutionary generation. But in an empty part of space, would you call a star a technologist, for fusing simple hydrogen and helium into many of the heavier natural elements, and all of the other natural elements in a supernova? Is gravity a technologist, coalescing a nebula into a star and planets? Is the natural chemistry of oceans, exposed to sunlight and volcanic vents, a technologist, proliferating new chemicals in durable numbers? Is some digital chemistry a technologist, poking through the matrix of reactions to find durable and reactable and prosperous chemical codes? Is the first bacterium a technologist, in millions of generations and quintillions of units floating through the ocean, exploring new ways to live generation after generation? And so forth. We are none of us God, not me and not you, so we are not pure ID, but require evolution at our edges, always adapting, and always growing our ID core, closer toward God, but nowhere near His Perfection.

Looney said...

Hmmm. Much to agree with. Probably I would focus on your last point: What is the minimum computing environment for an evolutionary algorithm? What is the minimum biological device that can replicate DNA in an evolutionary manner? Are these more like technology? Or more like a mix of chemicals undergoing a nuclear/chemical reaction of some sort?

I don't know quite how to distinguish when something is or isn't technology, but we know a silicon crystal like quartz isn't the same as an integrated circuit.

There is another item to consider:

You mentioned the science of science. This seems a bit problematic to me. For example, we have accepted engineering1 = science + intelligent design + ...

Now some intellectuals have loudly proclaimed that "intelligent design isn't science and shouldn't be taught in the science curriculum", for which I have some sympathy. Clearly in the definition of engineering1 above, intelligent design is not equal to science.

This is another concept that seems quite useful to me, but something to explain with an example. Many people can explain Einstein's theory(s) of relativity. We can also cite the examples that Einstein said inspired him. No one, however, can identify and quantify the exact set of mental steps which allowed Einstein to formulate the theories of relativity, including Einstein himself. This is one of the key features of true genius.

So it brings up the question: Can we possibly scientifically analyze intelligent design, when we have no idea what the details of the process are?

Another example: When I was young, I loved math and could easily give answers, with a glance, to problems that the teacher put on the board. Unfortunately, I usually couldn't explain how I got the answer, since so many steps were done subconsciously. Later, when I was tutoring math and physics in college, the need to analyze individual steps became critical, and I could better understand why people viewed certain problems as complex which had seemed like no-brainers to me before.

It also seems to be a problem when intelligent people try to argue against ID in something or other: So many amazingly complex steps can be done in the human brain that we are frequently lulled into thinking no intelligence was required at all to accomplish a particular task. When we try to replicate it in a computer program and then attempt to use it in the real world, a large set of problems are frequently encountered that we hadn't foreseen.

So what do you think, is it possible that brilliant people can claim ID wasn't needed, when in fact ID was the key to success?

LoneRubberDragon said...

LOO:Hmmm. Much to agree with. Probably I would focus on your last point: What is the minimum computing environment for an evolutionary algorithm? What is the minimum biological device that can replicate DNA in an evolutionary manner? Are these more like technology? Or more like a mix of chemicals undergoing a nuclear/chemical reaction of some sort?

I don't know quite how to distinguish when something is or isn't technology, but we know a silicon crystal like quartz isn't the same as an integrated circuit.

LRD:To say something is a technology depends on observing the complexity and organization of a system, whether it is naturally developed with an accumulation of natural ID, or completely synthetically developed with the direct application of ID, with spots of evolution for parts that are too complicated for discrete intelligent analytics. By observing the complexity of a thing, one determines the level of design. Quartz crystal is just a block of material, but can be lain with traces to make anything from an oscillator to a complex acoustic processing crystal. Same for silicon, which is just a crystal, but when sliced and polished and etched with circuits, becomes exceedingly complex, and has a kind of living word in time. Even a printed book is a technology to etch a form of non-living word, requiring a person to create a living word of the print in their mind. Measures of entropy are required in such analysis of technology. A crystal is a low grade of technology. A gravitationally compressed star, fusing new elements at different depth-pressure shells for gaseous temperature support, is another form of low grade technology. Atoms and molecules with their electronic and spatial structure are another form of low grade technology. A circuit on silicon is a high grade technology because it takes a great deal of information to mathematically, entropically define every trace and junction characteristic. The highest grade of technology is like completely, utterly optimized circuits that use nearly the fewest parts possible, that look almost like a random lump of transistors, but when you set them in motion, they suddenly operate in a specific way using those incomprehensibly compressed circuit elements in layout. For example, many sets of components may serve more than one purpose depending on the multitude of modes of the circuit.
That requires a high degree of simulation, variation, refinement, and integration, to reach a circuit so compressed that it is indecipherable to even a trained engineer, but when set running, becomes obvious as to its operation and function for every part. That is a higher grade of technology than even compartmentalized hierarchical design methods.

I have heard of experiments where people are taking bacteria, and removing DNA modules to find the smallest level of complexity that supports life. I have also heard of analysis of Turing machines and finite automata to discover the smallest computer capable of minimal processing unit reproduction, or minimal infinite process time, in the smallest code. In the 1970's they used to have core wars, to find the smallest code that could distribute itself and beat out other codes pursuing the same top-level goal. And in game theory tactics, I hear that tit-for-tat is the most compact temporal optimization of the Nash equilibrium for many games, including the repeated/statistical Prisoner's Dilemma, among other games with unbalanced and statistical responses over time.
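Tit-for-tat is simple enough to sketch in a few lines of Python (a minimal illustration; the 5/3/1/0 payoff values are the standard textbook choices, assumed here rather than taken from the discussion):

```python
# Minimal iterated prisoner's dilemma: tit-for-tat vs. always-defect.
# Payoffs (row player): temptation 5, reward 3, punishment 1, sucker 0.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's last move."""
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

def play(strategy_a, strategy_b, rounds):
    hist_a, hist_b = [], []          # each side sees the opponent's past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

# Tit-for-tat loses only the first round to a pure defector, then matches it.
print(play(tit_for_tat, always_defect, 10))  # -> (9, 14)
```

Against itself, tit-for-tat cooperates forever, which is why it does so well in round-robin tournaments.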



LOO:There is another item to consider:

You mentioned the science of science. This seems a bit problematic to me. For example, we have accepted engineering1 = science + intelligent design + ...

Now some intellectuals have loudly proclaimed that "intelligent design isn't science and shouldn't be taught in the science curriculum", for which I have some sympathy. Clearly in the definition of engineering1 above, intelligent design is not equal to science.

LRD:This is more of a semantic problem. Let's say you define in a custom dictionary:

(1) intelligent-design1, (adj) a theory that claims that anything complex is the domain of God alone, as natural physics and information theory does not and will not ever support any evolution at all, requiring only an intelligent designer God to permit something to occur, for man cannot even comprehend these things.

(2) intelligent-design2, (adj) a theory that says the creation of things of complexity from simplicity is both (1) the domain of all known prior methodical *science*, also known as engineering1, and (2) evolution theory *art*, which can be used beyond and outside of all known prior-art methodical science, for massive problems with combinatorial complexity like combinatorial chemistry, or NP-complete difficulty problems, like human thought, traveling salesman, hierarchical optimization problems, word understanding, etc., also known as engineering2; and that regarding (1) and (2), (1) is superior to (2) when (1) already exists, allowing a clear-cut analytics of intelligent design science, but also noting that when (1) and (2) are placed on a sufficiently fast and efficient medium for processing, (1) forms the core of structured analytical simplicity design, and (2) forms the periphery of unstructured non-analytical complexity design; and that when an analytic (1) can be derived, (2) always increases the mass of (1) *science*, and when an analytic (1) cannot be derived, (2) always reigns in those problems over structured analytical (1).

Some Christians adhere to a purely intelligent-design1 ideology, and will always get resistance for being counter-productive, stopping all thought and research involving engineering2, just because it is only God who can do those things, and therefore man cannot even comprehend these subjects.

So the despondent employer will turn to a Scientific Christian who holds intelligent-design2 with its dual engineering1 and engineering2 components in rightly divided capability, or daresay an Atheist who (suboptimally) only believes in engineering2, who will both be more than happy to take those high paying research or military application jobs, working in extremely complex engineering2 concepts to derive engineering1 on the edges of the methodology of science.

These are jobs usurped from an intelligent-design1 Christian, who says, "it is impossible", "it will never work!", and "there is no analytical method for that either, therefore God has to help us engineer this!". They are not wiser than the serpent, and give Scientific Christians a bad connotation by association, causing sociological backlashes in science, equal to the Creationist-intelligent-design1, know-nothing attitudes.

2 Corinthians 2 "11 Lest Satan should get an advantage of us: for we are not ignorant of his devices."

Intelligent-design2 without engineering2 is lame to complexity, and evolution without intelligent-design2(1)-engineering1 is blind to spiritual information analytic matters.

I would dare to define:

engineering3 =
(0) God's-material-universe +
(1) intelligent-design-analytics-science-knowledge-base(engineering1) +
(2) brute-force-evolution-complexity-analysis(engineering2).



LOO:This is another concept that seems quite useful to me, but something to explain with an example. Many people can explain Einstein's theory(s) of relativity. We can also cite the examples that Einstein said inspired him. No one, however, can identify and quantify the exact set of mental steps which allowed Einstein to formulate the theories of relativity, including Einstein himself. This is one of the key features of true genius.

So it brings up the question: Can we possibly scientifically analyze intelligent design, when we have no idea of what the details of the process?

LRD:This is an example where evolution theory can be applied, as the brute force examination of a nearly infinite number of process configurations and adaptations, to select the 0.0001% of the unknown highly complex models that can provide a backbone of highly fit processing units. And then analytics can take over and look for patterns in those useful templates that evolution methods found, like men being inspired by natural designs, turning evolution's fruits into intelligent-design-engineering1 core analytics, when analytical in simplicity.
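The brute-force-then-select loop described here can be sketched as a minimal genetic algorithm (an illustrative Python toy of my own construction; the "one-max" fitness goal and all parameter values are assumptions, not anything from the discussion):

```python
import random

def evolve(fitness, length=20, pop_size=50, generations=100, seed=0):
    """Minimal genetic algorithm over bit strings: keep the fittest half,
    refill the population with single-bit-mutated copies of the survivors."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)     # rank by fitness
        survivors = pop[:pop_size // 2]         # truncation selection
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(length)] ^= 1   # flip one random bit
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy goal: maximize the number of 1 bits ("one-max").
best = evolve(fitness=sum)
print(sum(best))  # count of 1 bits in the best string found
```

The point of the sketch matches the argument above: the selection loop is blind brute force, but the fitness function, the mutation operator, and the medium were all put there by engineering1.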



LOO:Another example: When I was young, I loved math and could easily give answers to problems that the teacher put on the board with a glance. Unfortunately, I usually couldn't explain how I got the answer since so many steps were done subconsciously. Later when I was tutoring math and physics in college, the need to analyze individual steps became critical and I could understand better why people viewed certain problems as being complex which seemed like no-brainers before.

LRD:That is a good example of how one's thoughts can start out evolutionary, with a library of atomic inner thoughts that slowly becomes intelligent-design-engineering1 analytics as knowledge and structure are acquired over time. Studying and memorizing and meditating for 20 years now on historical and current Christianity, religions, physics, maths, codes, circuits, systems, languages, words, perceptions, ideas, and thoughts, I actually have a good grasp of describing the complexities of thoughts and cybernetics. Unfortunately, a lot of it requires a number of pictures to convey the ideas, which a few words tend to mess up more than refine. We seem to be similar in our highly intuitive nature toward things technical, which most people have to "take on faith", even for pure analytical methods of science and math.



LOO:It also seems to be a problem when intelligent people try to argue against ID in something or other: So many amazingly complex steps can be done in the human brain that we are frequently lulled into thinking no intelligence was required at all to accomplish a particular task. When we try to replicate it in a computer program and then attempt to use it in the real world, a large set of problems are frequently encountered that we hadn't foreseen.

So what do you think, is it possible that brilliant people can claim ID wasn't needed, when in fact ID was the key to success?

LRD:I cannot argue against intelligent design proper, as that would be like cutting off one of one's own legs. And I can say, yes, otherwise intelligent young people argue against intelligent design, kicking against the pricks:

Matthew 7 "15 Beware of false prophets, which come to you in sheep's clothing, but inwardly they are ravening wolves. 16 Ye shall know them by their fruits. Do men gather grapes of thorns, or figs of thistles? 17 Even so every good tree bringeth forth good fruit; but a corrupt tree bringeth forth evil fruit. 18 A good tree cannot bring forth evil fruit, neither can a corrupt tree bring forth good fruit. 19 Every tree that bringeth not forth good fruit is hewn down, and cast into the fire. 20 Wherefore by their fruits ye shall know them. 21 Not every one that saith unto me, Lord, Lord, shall enter into the kingdom of heaven; but he that doeth the will of my Father which is in heaven. 22 Many will say to me in that day, Lord, Lord, have we not prophesied in thy name? and in thy name have cast out devils? and in thy name done many wonderful works? 23 And then will I profess unto them, I never knew you: depart from me, ye that work iniquity."

But I can also say, in near-mirror-symmetry, that:

It is possible that brilliant people can claim [brute force evolution] wasn't needed, when in fact, [brute force evolution] was [A] key to success.

A key, I carefully add, for I believe evolution AND intelligent design are both required, and that either God crafted a world showcasing the wonders of evolution ideas, or nature evolved naturally, for the most part, after God created it, and ID arose to collect a growing mass of knowledge and evolutionary treasures. And it is hard to tell, looking at the world from 1 second after the Big Bang, whether God had a hand in everything subsequent but kept it all secretly evolutionary natural, or God had little hand in everything since the universe's creation and so everything looks natural, naturally. And not to say that evolution doesn't wane or change in time as ID grows, but that evolution never disappears in man's hands, because we never reach the pinnacle of God; we can only approach infinity from our zero location, keeping all ID, and using evolution for all exploration, and ID for pruning evolution.

Proverbs 1 "22 How long, ye simple ones, will ye love simplicity? and the scorners delight in their scorning, and fools hate knowledge? 23 Turn you at my reproof: behold, I will pour out my spirit unto you, I will make known my words unto you. 24 Because I have called, and ye refused; I have stretched out my hand, and no man regarded; 25 But ye have set at nought all my counsel, and would none of my reproof: 26 I also will laugh at your calamity; I will mock when your fear cometh; 27 When your fear cometh as desolation, and your destruction cometh as a whirlwind; when distress and anguish cometh upon you. 28 Then shall they call upon me, but I will not answer; they shall seek me early, but they shall not find me: 29 For that they hated knowledge, and did not choose the fear of the LORD: 30 They would none of my counsel: they despised all my reproof. 31 Therefore shall they eat of the fruit of their own way, and be filled with their own devices. 32 For the turning away of the simple shall slay them, and the prosperity of fools shall destroy them. 33 But whoso hearkeneth unto me shall dwell safely, and shall be quiet from fear of evil."

Now taking your point about current AI software tools, the problem is that the systems one often works with are still too simple, like a young child or a baby, who cannot step back out of the system at a higher level so that the unforeseen problems can fold themselves into the core problem set. A wonderful task of abstraction that mature humans are keen at, and, admittedly, modern computers still haven't captured, despite human intelligent design to capture that, with all hopes and intention, because it is so large and hard, but not impossible.

Kind of unrelated, but fascinating: if you would like to read about a computer system I worked on that has total physics free-will properties, I have an interesting project writeup at:

http://lonerubberdragon.blogspot.com/#S10

LoneRubberDragon said...

Oh, and if you want a good example of evolution creeping into even the most analytical attempts at perfection, try writing more than 100 lines of perfect complex code on the first try, and see how one sometimes has to evolve the design through adaptations and corrections to reach the final desired code.

Code is complex, and so evolution and modifications are often par for the course, no matter how analytical one's approach may be.

Looney said...

LRD, today I made some changes to my coding to get some more capabilities running in parallel. I suppose this would be considered an "evolutionary" step, but it took changes to about a dozen lines of code scattered around perhaps 250,000 lines of code. The code wouldn't work until about 100 keystrokes were done correctly and simultaneously, which allowed me to jump from functioning code condition A to functioning code condition B.

This is the sort of "evolution" that I have been professionally doing on a daily basis for almost three decades. Because there is no viable state between code condition A and code condition B, there is no evolutionary path from A to B other than a single mutation in which a large number of key strokes are correctly placed throughout the program.

For these kinds of problems, engineering1 is the only solution. Engineering2 still seems to me something that only exists in fantasy land. I will be proven wrong when somebody puts me out of business using engineering2, but I anticipate I will die first or be put out of business by something else!

LoneRubberDragon said...

I know exactly what you're doing! That is actually a very good argument from the side of irreducible complexity ID issues, one that I can say I honestly grapple with.

But I can guess from your software description that (a) new function(s) with different functionality was introduced, causing simultaneous rippled changes to necessarily be made in those hundred-odd locations.

I would suggest that in an ideal evolutionary system "code", more conservative methods are used to alter code from stateA to stateB, by a set of transitions in the introduced module and fewer spots in code, due to higher levels of hierarchical isolation. In that sense DNA code is different from human code in structural functional analysis.

Evolution would never introduce a new functionality all at once while also forcing a simultaneous necessary change of 100 sites in code. Only intelligent design analytics can perform that delicate level of code surgery, like a human, so it is not the strongest analogy, but still a good argument about the irreducible complexity issues evolution+ID algorithms must address to make thinking machines.

Additionally, evolutionary systems deal with highly hierarchical, organized analog systems that can be served by gradient descent algorithms from stateA to stateB over generations of fitness, e.g. multiplying a code copy N times with f(N) proportionality control. Additionally, analog systems can make minor changes at a few key, sparsely distributed parallel locations for systemic changes at once. For example, Michael Behe in "Darwin's Black Box", pages 40-41, notes that a single spot gene change produces a system level change, making fruit fly antennae into legs, evidence of spot-hierarchical system control.

On the other hand, computer code is dense, efficient, and all digital, and so runs in a more delicate state of affairs, where one module change can necessitate an absolute requirement for simultaneous accompanying digital changes to get from one known working state to another, with no working states in between. If the code were abstractly tagged, a new module functionality could have been introduced and all of its calling instances changed simultaneously, but you probably have information from that function bleeding into different code partitions, requiring the necessary 100 changes at once. Perhaps careful, judicious encapsulation could help you reduce code editing times, much as evolutionary design uses it.

So for digital code, irreducible complexity and one machine, one man, requires us to perform some degree of the evolutionary and intelligent design analysis in our head to make the right change, where our mind is the medium of both ID and evolution. If you had 100,000 AI-managed copies of that program that could be evolved *and* use ID restrictions of heuristic changes, one might be able to derive+evolve the same thing in a short time, but the model today is one computer, one man. Therefore, your refined intelligent design and evolutionary analysis job of selecting the required change locations is quite safe for decades.

But a machine that can someday absorb the characteristics of an entire project, and run ID analysis and heuristic evolutionary variations to find the key parameters involved in a change from A to B, will give us a run for our money, unless we are the ones who write the AI softwares. And if not us, because one never thinks it will work, then another country that may be more open to both ID and evolution algorithms, e.g. Japan, China, India?

================

I thought I'd bounce a definite ID analytic off you to see what you know of this analytic "formulation".

You mentioned the Taylor series, as an analog to analytic solutions to ID analytics problems, in increasing approximation of degreed terms. I wonder if you've ever heard of an analytic mathematical space, that I will describe.

For background, last year I was thinking Greek in math spaces, and came across an elegant analytical vector space. Imagine a space of 1 to N dimensions in size, corresponding to a relationship of input variables to that space, such that, for example, for:
N=3,
with input variables to a function of:
(X,Y,Z),
that they relate to the space of:
(x,y,z)
by:
first_f(x,y,z) = X^x*Y^y*Z^z
at all points of the space
(x,y,z).
So, for example, at
(x,y,z) = (1,2,3),
the relationship in this analytic space is:
first_f(1,2,3) = X^1*Y^2*Z^3.


After the space, e.g.,
(x,y,z),
for
N=3,
is defined in its relationship to input variables,
(X,Y,Z),
one now adds weighted dirac deltas or "samplers" to the space at select points of
(x,y,z), like:
1*dirac
at
(2,0,0), (0,2,0), (0,0,2),
and also one adds a second general function that can be placed around the space,
second_f(R^N) = f(volume_integral_over(first_f(x,y,z) * weighted_dirac_deltas(x,y,z) dx dy dz)),
like:
second_f(R^N) = (volume_integral_over(first_f(x,y,z) * weighted_dirac_deltas(x,y,z) dx dy dz)) ^ 0.5,
which in this particular example yields:
second_f(R^N) = (X^2 + Y^2 + Z^2) ^ 0.5,
which, as you may well recognize, is the distance measure of the point,
(X,Y,Z),
to,
(0,0,0).
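A small Python sketch of this worked example (function names mirror the notation above; since the samplers are dirac deltas, the volume integral collapses to a finite sum over the delta points):

```python
# first_f at a delta point (x, y, z) is X**x * Y**y * Z**z; integrating
# against unit-weight dirac deltas just sums the sampled monomials.
def first_f(point, X, Y, Z):
    x, y, z = point
    return X ** x * Y ** y * Z ** z

def second_f(delta_points, X, Y, Z, outer_power):
    """Sample first_f at unit-weight delta points, then apply the outer power."""
    sampled = sum(first_f(p, X, Y, Z) for p in delta_points)
    return sampled ** outer_power

# Deltas 1@(2,0,0), (0,2,0), (0,0,2) with outer power 0.5 give the distance
# of (X, Y, Z) from the origin, e.g. sqrt(9 + 16 + 144) = 13.
print(second_f([(2, 0, 0), (0, 2, 0), (0, 0, 2)], 3.0, 4.0, 12.0, 0.5))  # -> 13.0
```

Swapping in the single delta 1@(1,1,1) with outer power 1.0 reproduces the volume example that follows.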


Now the elegance of the vector space is shown when you examine many geometric equations, within this framework, in parallel equivalent notation:
(0) distance of point, for N=3:
weighted_dirac_deltas = {1@{(2,0,0), (0,2,0), (0,0,2)}} (deltas on a plane)
Dist = (volume_integral_over(first_f(x,y,z) * weighted_dirac_deltas(x,y,z) dx dy dz)) ^ 0.5,

(1) volume of cube, for N=3:
weighted_dirac_deltas = {1@(1,1,1)}
Vol = (volume_integral_over(first_f(x,y,z) * weighted_dirac_deltas(x,y,z) dx dy dz)) ^ 1.0,

(2) perimeter of triangle, for N=3:
weighted_dirac_deltas = {1@{(1,0,0), (0,1,0), (0,0,1)}} (deltas on a plane)
Perim = (volume_integral_over(first_f(x,y,z) * weighted_dirac_deltas(x,y,z) dx dy dz)) ^ 1.0,

(3) area of triangle, for N=3:
weighted_dirac_deltas = {v1@{(4,0,0), (0,4,0), (0,0,4)}, v2@{(3,1,0), (1,3,0), (0,3,1), (0,1,3), (1,0,3), (3,0,1)}, v3@{(2,2,0), (0,2,2), (2,0,2)}, v4@{(2,1,1), (1,2,1), (1,1,2)}} (deltas on a plane)
Area = ((1/16)volume_integral_over(first_f(x,y,z) * weighted_dirac_deltas(x,y,z) dx dy dz)) ^ 0.5,

(4) area of radian spherical triangle of radius R, for N=3:
weighted_dirac_deltas = {-pi@(0,0,0), 1@{(1,0,0), (0,1,0), (0,0,1)}} (deltas on a plane)
Area = ((R^2)volume_integral_over(first_f(x,y,z) * weighted_dirac_deltas(x,y,z) dx dy dz)) ^ 1.0,

(5) radius of inscribed circle, for N=3:
weighted_dirac_deltas1 = {v1@{(4,0,0), (0,4,0), (0,0,4)}, v2@{(3,1,0), (1,3,0), (0,3,1), (0,1,3), (1,0,3), (3,0,1)}, v3@{(2,2,0), (0,2,2), (2,0,2)}, v4@{(2,1,1), (1,2,1), (1,1,2)}} (deltas on a plane)
weighted_dirac_deltas2 = {1@{(1,0,0), (0,1,0), (0,0,1)}} (deltas on a plane)
RadInsc = ((1/16)volume_integral_over(first_f(x,y,z) * weighted_dirac_deltas1(x,y,z) dx dy dz)) ^ 0.5 *
((1/2)volume_integral_over(first_f(x,y,z) * weighted_dirac_deltas2(x,y,z) dx dy dz)) ^ -1.0,

(6) radius of circumscribed circle, for N=3:
weighted_dirac_deltas1 = {v1@{(4,0,0), (0,4,0), (0,0,4)}, v2@{(3,1,0), (1,3,0), (0,3,1), (0,1,3), (1,0,3), (3,0,1)}, v3@{(2,2,0), (0,2,2), (2,0,2)}, v4@{(2,1,1), (1,2,1), (1,1,2)}} (deltas on a plane)
weighted_dirac_deltas2 = {1@(1,1,1)}
RadCircum = ((1/16)volume_integral_over(first_f(x,y,z) * weighted_dirac_deltas1(x,y,z) dx dy dz)) ^ -0.5 *
((1/4)volume_integral_over(first_f(x,y,z) * weighted_dirac_deltas2(x,y,z) dx dy dz)) ^ 1.0,

(7) sine(X) taylor series, for N=1:
weighted_dirac_deltas = {1@(1), -1/3!@(3), 1/5!@(5), -1/7!@(7) ...} (deltas on a line)
SineTaylor = (volume_integral_over(first_f(x) * weighted_dirac_deltas(x) dx)) ^ 1.0,

(8) cosine(X) taylor series, for N=1:
weighted_dirac_deltas = {1@(0), -1/2!@(2), 1/4!@(4), -1/6!@(6) ...} (deltas on a line)
CosineTaylor = (volume_integral_over(first_f(x) * weighted_dirac_deltas(x) dx)) ^ 1.0,

(9) tangent(X) taylor series, for N=1:
weighted_dirac_deltas = {1@(1), 1/3@(3), 2/15@(5), ...} (deltas on a line)
TangentTaylor = (volume_integral_over(first_f(x) * weighted_dirac_deltas(x) dx)) ^ 1.0,

(10) exponent(X) taylor series, for N=1:
weighted_dirac_deltas = {1@(0), 1/1!@(1), 1/2!@(2), 1/3!@(3), 1/4!@(4) ...} (deltas on a line)
ExponentTaylor = (volume_integral_over(first_f(x) * weighted_dirac_deltas(x) dx)) ^ 1.0,

(11) exp(-1/X^2) laurent series, for N=1:
weighted_dirac_deltas = {1@(0), -1/1!@(-2), 1/2!@(-4), -1/3!@(-6), ...} (deltas on a line)
Exp(-1/x^2)Laurent = (volume_integral_over(first_f(x) * weighted_dirac_deltas(x) dx)) ^ 1.0,

(12) 1/(X^3(1-X)) laurent series, for N=1:
weighted_dirac_deltas = {1@{(-3), (-2), (-1), (0), (1), (2), ...}} (deltas on a line)
1/(X^3(1-X))Laurent = (volume_integral_over(first_f(x) * weighted_dirac_deltas(x) dx)) ^ 1.0.

(13) linear affine transform of (X,Y,Z) coordinates, for N=3:
weighted_dirac_deltas1 = {v1@(1,0,0), v2@(0,1,0), v3@(0,0,1)} (deltas on a plane)
weighted_dirac_deltas2 = {v4@(1,0,0), v5@(0,1,0), v6@(0,0,1)} (deltas on a plane)
weighted_dirac_deltas3 = {v7@(1,0,0), v8@(0,1,0), v9@(0,0,1)} (deltas on a plane)
Affine(X,Y,Z) = ((volume_integral_over(first_f(x,y,z) * weighted_dirac_deltas1(x,y,z) dx dy dz)) ^ 1.0,
(volume_integral_over(first_f(x,y,z) * weighted_dirac_deltas2(x,y,z) dx dy dz)) ^ 1.0,
(volume_integral_over(first_f(x,y,z) * weighted_dirac_deltas3(x,y,z) dx dy dz)) ^ 1.0),

(14) second order affine transform of (X,Y,Z) coordinates, for N=3:
weighted_dirac_deltas1 = {v1@(1,0,0), v2@(0,1,0), v3@(0,0,1), v4@(2,0,0), v5@(1,1,0), v6@(0,2,0), v7@(0,1,1), v8@(0,0,2), v9@(1,0,1)}
weighted_dirac_deltas2 = {v10@(1,0,0), v11@(0,1,0), v12@(0,0,1), v13@(2,0,0), v14@(1,1,0), v15@(0,2,0), v16@(0,1,1), v17@(0,0,2), v18@(1,0,1)}
weighted_dirac_deltas3 = {v19@(1,0,0), v20@(0,1,0), v21@(0,0,1), v22@(2,0,0), v23@(1,1,0), v24@(0,2,0), v25@(0,1,1), v26@(0,0,2), v27@(1,0,1)}
Affine(X',Y',Z') = ((volume_integral_over(first_f(x,y,z) * weighted_dirac_deltas1(x,y,z) dx dy dz)) ^ 1.0,
(volume_integral_over(first_f(x,y,z) * weighted_dirac_deltas2(x,y,z) dx dy dz)) ^ 1.0,
(volume_integral_over(first_f(x,y,z) * weighted_dirac_deltas3(x,y,z) dx dy dz)) ^ 1.0),

(15) multiplication of two complex numbers, for N=4:
weighted_dirac_deltas1 = {1@(1,0,1,0), -1@(0,1,0,1)} (deltas on a plane)
weighted_dirac_deltas2 = {1@{(1,0,0,1), (0,1,1,0)}} (deltas on a plane)
ComplexMult(Re,Im) = ((volume_integral_over(first_f(w,x,y,z) * weighted_dirac_deltas1(w,x,y,z) dw dx dy dz)) ^ 1.0,
(volume_integral_over(first_f(w,x,y,z) * weighted_dirac_deltas2(w,x,y,z) dw dx dy dz)) ^ 1.0)

By stepping outside of the system one level, and making a higher geometry formulation, arranged in sets and simpler operations, one can encapsulate in this analytic space formulation numerous geometry equations, Taylor series, by implication Maclaurin series, Laurent series, affine transforms, complex math, and likely numerous other multivariable polynomial power equations. Also, many of the equations show compact systematic natures, occurring, for many of these examples, on sets of weighted_dirac_delta planes and/or lines. These examples also remind me of the analytic versions of single layer neural networks.
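As a quick sanity check of the formulation (an illustrative Python sketch of my own; since the samplers are dirac deltas, the integral is just a weighted sum of monomials), examples (7) and (15) above can be verified numerically:

```python
import math

def delta_integral(deltas, variables):
    """Integral of first_f against weighted dirac deltas: one weighted
    monomial term per delta, keyed as {exponent tuple: weight}."""
    total = 0.0
    for exponents, weight in deltas.items():
        term = weight
        for base, exp in zip(variables, exponents):
            term *= base ** exp
        total += term
    return total

# Example (7): sine(X) deltas 1@(1), -1/3!@(3), 1/5!@(5), ... (first 10 terms).
sine_deltas = {(2 * k + 1,): (-1) ** k / math.factorial(2 * k + 1)
               for k in range(10)}
print(delta_integral(sine_deltas, (1.0,)), math.sin(1.0))  # agree closely

# Example (15): complex multiplication, (W,X) and (Y,Z) the two numbers.
re_deltas = {(1, 0, 1, 0): 1, (0, 1, 0, 1): -1}   # W*Y - X*Z
im_deltas = {(1, 0, 0, 1): 1, (0, 1, 1, 0): 1}    # W*Z + X*Y
a, b = complex(2, 3), complex(4, -1)
v = (a.real, a.imag, b.real, b.imag)
print(complex(delta_integral(re_deltas, v), delta_integral(im_deltas, v)))
print(a * b)  # both print (11+10j)
```

The same `delta_integral` helper, composed with the outer power functions, covers every entry in the catalog above.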

Do you know what this vector space is called, from ID analytic methods? I have not been able to find a name for this system myself in research.

LoneRubberDragon said...

NOTE: I did play a little loose, saying it required 100 places in code to make the change from stateA to stateB, to more effectively state the requirements for intelligent design. Claiming that the 100s of keystrokes are required in exact sequence is misleading and misunderstood, because the compiler turns all of those strings into only a few compiler tokens and stack handling descriptors for prototypes. The 100s of keystrokes are, in reality, only a *few* (admittedly precise) token changes, but one doesn't ever get to control code in a token level editor, but invariably in a character level editor, so the exaggeration is a little false to the cybernetic code structures.

Looney said...

I will need to look at your analytical space some more.

Regarding irreducible complexity, the fact that this exists in biology is a key to HIV treatment. HIV reproduces and mutates rapidly, so that it can quickly overcome any individual drug that is used to stop it. By using three or four individual drugs at the same time which latch onto the HIV virus at different points, the probabilities of having 3 or 4 simultaneous mutations at the exact points of the virus become astronomically small, even though the virus produces trillions of copies of itself.
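The combinatorial point here can be made with a back-of-the-envelope sketch (the mutation probability and daily copy count below are assumed round numbers for illustration, not clinical figures):

```python
# Illustrative, assumed numbers: per-copy chance of a resistance mutation
# at one specific drug-binding site, and virus copies produced per day.
p_site = 1e-5          # assumed mutation probability at one target site
copies_per_day = 1e12  # assumed daily virus production

def expected_resistant(n_drugs):
    """Expected daily copies mutated at all n drug-binding sites at once,
    treating the site mutations as independent events."""
    return copies_per_day * p_site ** n_drugs

print(expected_resistant(1))  # ~1e7:  single-drug escape mutants arise daily
print(expected_resistant(3))  # ~1e-3: triple escape essentially never occurs
```

The expected count falls by a factor of 100,000 with each added drug, which is the whole logic of combination therapy.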

Anyway, my experience with technology is one extreme: irreducible complexity on development after development every single day. The meta-narrative of biological evolution, at the very opposite extreme, requires that there be not one instance of irreducible complexity from a single standalone amino acid to any point along the postulated tree of life. This extreme point of view, however, is purely a hypothetical, which, as I have noted, is completely contrary to all of mankind's cumulative technological experience.

There is another philosophical point of interest: It seems that some people view evolution so strongly as a fundamental paradigm, and consider human intelligence necessarily a product of evolution, that they have concluded ID is actually a subset of non-ID evolution. In this way, ID can never be respected as an entity independent from non-ID evolution, so ID arguments against non-ID evolution are automatically dismissed as nonsense. What would you think of such a line of reasoning?

LoneRubberDragon said...

LRD:Upon further examination of the reformulation of numerous equations into a power space relation, it appears to be a multidimensional extension of the Taylor series that allows multivariate symmetry of coefficients in many geometrical and polynomial equations to be easily revealed and systematized, even allowing fractional polynomial powers to exist, unlike Taylor and Laurent series. Kind of like Euler's formula encapsulates exponentials and complex numbers in its formulation. But it doesn't seem to have a mathematical name.

LOO:Regarding irreducible complexity, the fact that this exists in biology is a key to HIV treatment. HIV reproduces and mutates rapidly, so that it can quickly overcome any individual drug that is used to stop it. By using three or four individual drugs at the same time which latch onto the HIV virus at different points, the probabilities of having 3 or 4 simultaneous mutations at the exact points of the virus become astronomically small, even though the virus produces trillions of copies of itself.

LRD:HIV is not so much a case of irreducible complexity as an example of the ease of systemic evolution, because it easily mutates without God's help (we hope) to resist new synthetic antibodies and chemicals that have never existed before in the world, which is an increase of new information and complexity from nothing that ever existed. Or else God programmed AIDS with enough variation that it could live through any future drug / antibody regimen to the end of time, which sounds a lot like infinite evolution capability.

LOO:Anyway, my experience with technology is one extreme: irreducible complexity on development after development every single day. The meta-narrative of biological evolution, at the very opposite extreme, requires that there be not one instance of irreducible complexity from a single standalone amino acid to any point along the postulated tree of life. This extreme point of view, however, is purely a hypothetical, which, as I have noted, is completely contrary to all of mankind's cumulative technological experience.

LRD:In current engineering, I agree, it is all mostly ID with brute force, and refined heuristic mental search and identification operations going on all the time, but not on paper or computer. But on the other hand, I can exemplify an apparent irreducible complexity that can arise by natural means. In combinatorial chemistry one can develop an irreducibly complex chain reaction in parallel simultaneous fashion. For example, imagine a matrix of 100,000,000 possible reactions (fewer than 10,000 mutually stable reactable molecule species are required) such that one can naturally see that A catalyzes B catalyzes C catalyzes D catalyzes A, and (A + photon) can expel, say, an e- or H+, which allows the assembly of a glucose from the < 10,000. It is essentially a metabolic photosynthetic pathway skeleton, of simultaneous complexity for a biologist to discover, but is only 5/100,000,000 of the perfectly unguided blind natural combinatorial chemical space. And it feeds itself in numbers through hypercyclic catalytic chain reactions in A, B, C, D. It would be only one of potentially numerous loops and networks of reactions in the 100,000,000 potential reactions. Now most of the reactions can be zero forward reactions, but the space allowed is so large (greater than squared in combinations) as to intersect numerous useful reactions, of which some support stable molecule diversification, e.g. E catalyzes a new Molecule-10,001, and F catalyzes Molecule-10,002, etc., which causes the reaction matrix potential to grow. Here A~F are elements of the < 10,000.
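The hypercyclic loop A→B→C→D→A can be sketched as a toy simulation (all rates, counts, and step sizes assumed purely for illustration): each species in the closed loop grows in proportion to its catalyst, while an uncatalyzed bystander species does not grow at all.

```python
# Toy sketch (assumed rates) of a hypercyclic catalytic loop:
# A catalyzes B, B catalyzes C, C catalyzes D, D catalyzes A.
rate = 0.1                           # assumed catalysis rate per step
pool = {"A": 1.0, "B": 1.0, "C": 1.0, "D": 1.0}
lone = 1.0                           # an uncatalyzed species, for contrast

order = ["A", "B", "C", "D"]
for _ in range(100):
    prev = dict(pool)                # snapshot so updates are simultaneous
    for i, s in enumerate(order):
        catalyst = order[i - 1]      # index -1 wraps: D catalyzes A
        pool[s] = prev[s] + rate * prev[catalyst]
    # 'lone' has no catalyst in the loop, so it never increases.

print(pool["A"] > 1000 * lone)       # -> True: the closed loop amplifies itself
```

The mechanism, not the numbers, is the point: a closed catalytic cycle is self-amplifying in a way that no isolated reaction in the same matrix is.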

ID has to analytically prove that in a plausible early ocean molecule diversity and space (volcanic vents, UV exposure, cold water, hot water, under rocks, next to metals and minerals, etc.), for the 100, 1,000, 10,000, 100,000, etc. mutually stable chemicals that might exist in an early ocean, the 10,000+, 1,000,000+, 100,000,000+, 10,000,000,000+, etc. potential reactions all contain **zero new molecule forward reactions**, that is, none which contribute in feedback to the complexity of the soup. OR even worse for evolutionists (tragedy in fact), that at every one of these points they *reduce* the complexity of the soup with a net negative of forward reactions, that is, they have numerous "breakdown" reverse reactions to the subset of original molecules, always and everywhere emphatically bringing the steady-state ocean to, say, only 100 stable free molecule species in reactive equilibrium. Such an extensive diverse chemical test would strongly disprove that life originated from nonliving matter, and KILL evolution, and PROVE God as the original ID force of the universe. But has it been performed?

LOO:There is another philosophical point of interest: It seems that some people are so strongly viewing evolution as a fundamental paradigm, and have considered that human intelligence is necessarily a product of evolution, so that they have concluded that ID is actually a subset of non-ID evolution. In this way, ID can never be respected as an independent entity from non-ID evolution, so ID arguments against non-ID evolution are automatically dismissed as nonsense. What would you think of such a line of reasoning?

LRD:Well, if humans started with no knowledge, and fumbled around for centuries, developing ID knowledge in exploration, e.g. Greece and the Renaissance, one can argue that human ID does indeed spring forth as a product of the brutish evolutionary exploration of truth and mistakes. But once human ID analytics, "the methodology", is discovered, it always grows and IS science, and stands on its own, and grows over evolution except at the edges of complexity and the unknown. Today, ID towers over some of evolution, by encapsulating evolution's old explorations in known ID analytical methods, but evolution still exists. Whether in software at the edge of (research or difficult) nearly solvable problems, or in the subconscious of human thought exploration ... evolution is still there. And yet, evolution always remains at its heart the art of brute force exploration. And ID always remains the growing of science. One can even say that DNA, protein machineries, and codes are the ID products of evolution's discoveries, which are "science" and "technology" ID, just as much as human accumulated ID knowledge. And yet evolution in biology still operates today with variations and mutations, much more visible in bacteria and small life than in, say, mammals, which don't evolve quite as much, and probably have semi-ID genetic machinery for semi-directed DNA gene crossover and directed "mutation" variation capability. But when humans begin to write their own DNA codes, then DNA becomes intelligent design completely, and my lifetime will probably see the advent of that small miracle.

So, in that sense evolution is not the science in my view, even if indispensable in our subconscious thinking, and through natural reproductive or significant variation exploration, or nature. And intelligent design is the ever growing and unmovable foundation of truth and science, and takes over evolution. And I would disagree with people like Dawkins and such, on their inversion of the concept that you mention above, and yet argue with Creationist disparagement of evolution altogether, as it can't be dispensed with, unless you claim that MAN already has the whole architecture of all Intelligent Design, and just keeps it mostly secret. But really, we don't know everything, I would surmise, which means that only God is the master of ID, but doesn't reveal it for Tower of Babel reasons.

I will still prefer the narrow middle road rarely taken ... the best of both worlds using extensive ID and evolution for the hard stuff beyond human ID capabilities. And I must remain agnostic in the sense of what level God is involved with in nature, but remain in awe of the Big Bang creation that started nature in the first place.

Here's a question ... of ID, why hasn't man achieved the capacity to make computers that think like humans? If ID is analytic and the key to solving all things, where is the smart computer? And claiming it is impossible I don't consider a viable option, because where's the "equation" proving computers can't be AI human smart? My thinking is that evolution of complex system thought models is required (but not yet accomplished) to study billions of code chunks and rank or rate {themes, methods, algorithms, processes, architectures}, in modes that are too numerous for analytics other than evolution algorithms, and beyond any one ID human to develop. CYC as an example is a nice concept, but is only one model of thought and information capture among thousands "discovered" by evolutionary biology. If life were so analytical, we should have computer tools as smart as savant humans, and general purpose computers as smart as a 19th century human. But we don't yet, and I don't think it is impossible, just difficult for the ID subset of 7 billion general humans.

LoneRubberDragon said...

LRD:To reiteratively encapsulate, Evolution appears to reign at the beginning of natural time, and continually produces ID, which infuses evolution with ever increasing ID power, until at the end of time ID appears to reign in nearly all domains, except at the edge of chaos and complexity, where ID cannot tread; thus evolution is never completely dispensed with, unless life is incorruptibly changed into a state of maximal ID, with all evolution excluded from that state in a perfect-accepted sea of ID glass, so to speak. But being in that state means that exploring for new (unknown to the state) ID analytics is prevented, because brute force searches would recall the days of evolution in that perfect-accepted space. But if ID were infinitely catalogued and retrievable, so that no experimentation is ever required in time-space, then evolution is made dead to that incorruptible perfect-accepted state sea-of-glass.

LoneRubberDragon said...

And if you think computers can't be given a soul, I would dare to relate these words in my favor from The Bible:

Luke 19 "39 And some of the Pharisees from among the multitude said unto him, Master, rebuke thy disciples. 40 And he answered and said unto them, I tell you that, if these should hold their peace, the stones would immediately cry out."

Luke 3 "7 Then said he to the multitude that came forth to be baptized of him, O generation of vipers, who hath warned you to flee from the wrath to come? 8 Bring forth therefore fruits worthy of repentance, and begin not to say within yourselves, We have Abraham to our father: for I say unto you, That God is able of these stones to raise up children unto Abraham. 9 And now also the axe is laid unto the root of the trees: every tree therefore which bringeth not forth good fruit is hewn down, and cast into the fire."

Matthew 3 "6 And were baptized of him in Jordan, confessing their sins. 7 But when he saw many of the Pharisees and Sadducees come to his baptism, he said unto them, O generation of vipers, who hath warned you to flee from the wrath to come? 8 Bring forth therefore fruits meet for repentance: 9 And think not to say within yourselves, We have Abraham to our father: for I say unto you, that God is able of these stones to raise up children unto Abraham. 10 And now also the axe is laid unto the root of the trees: therefore every tree which bringeth not forth good fruit is hewn down, and cast into the fire."

God can raise life from clay in Adam, and raise children from the very elements of the earth. Man can produce corruptible flesh, man may be able to make incorruptible computer bodies.

LoneRubberDragon said...

And moreso,

1 Peter 2 "4 To whom coming, as unto a living stone, disallowed indeed of men, but chosen of God, and precious, 5 Ye also, as lively stones, are built up a spiritual house, an holy priesthood, to offer up spiritual sacrifices, acceptable to God by Jesus Christ. 6 Wherefore also it is contained in the scripture, Behold, I lay in Sion a chief corner stone, elect, precious: and he that believeth on him shall not be confounded. 7 Unto you therefore which believe he is precious: but unto them which be disobedient, the stone which the builders disallowed, the same is made the head of the corner, 8 And a stone of stumbling, and a rock of offence, even to them which stumble at the word, being disobedient: whereunto also they were appointed."

Psalms 118 "16 The right hand of the LORD is exalted: the right hand of the LORD doeth valiantly. 17 I shall not die, but live, and declare the works of the LORD. 18 The LORD hath chastened me sore: but he hath not given me over unto death. 19 Open to me the gates of righteousness: I will go into them, and I will praise the LORD: 20 This gate of the LORD, into which the righteous shall enter. 21 I will praise thee: for thou hast heard me, and art become my salvation. 22 The stone which the builders refused is become the head stone of the corner. 23 This is the LORD's doing; it is marvellous in our eyes. 24 This is the day which the LORD hath made; we will rejoice and be glad in it. 25 Save now, I beseech thee, O LORD: O LORD, I beseech thee, send now prosperity. 26 Blessed be he that cometh in the name of the LORD: we have blessed you out of the house of the LORD."

Looney said...

"To reiteratively encapsulate, Evolution appears to reign at the beginning of natural time"

I think it would be more accurate to say " ... , Evolution 'is postulated' to reign at the beginning of natural time, due to prevailing theological theories of the late 18th century and early 19th, ..."

LoneRubberDragon said...

THE THINGS I DO SEE, do disappoint me, as you have noticed. As much as there are good words in science, they fall short.

Take this quote from Richard Dawkins in "The God Delusion", page 35:

"An atheist in this sense of philosophical naturalist is somebody who believes there is nothing beyond the natural, physical world, no *super*natural creative intelligence lurking behind the observable universe, no soul that outlasts the body and no miracles - except in the sense of natural phenomena that we don't yet understand."

And Dawkins asks on page 404, "to give life meaning and a point ... Is it a similar infantilism that really lies behind the 'need' for a God?".

Well, if science now and forever, by its adherent voices, will always refuse to save the soul, a soul that to it is nothing and non-existent, then we infants need a God to save us, and even to save an openly and admittedly soul-less science. For in 1000 billion years, when all stars have died, and all life everywhere is dead and frozen in the ashes of the galaxies, then what is the meaning of the quintillions of universal lives, or what was the exact purpose of a now dead and frozen cold science?

IT IS A DISAPPOINTMENT IN SCIENCE ADVOCACY. They are an entity that will *forever* deny the ability of saving a soul beyond the body, for there is no soul to save, in science; we are all just dead animate matter for the moment. Quantum physics will never explain why wave functions collapse (and uncollapse in quantum eraser experiments) in this universe, based in the immaterial-untouchable-spiritual-structural-analytical-informational configurations of the material universe causing a transcendental wave function collapse / uncollapse infinitely faster than the speed of light. Science will always fall short, for now and forever, as they never admit the soul into science. Without God, we are all dead men walking ... what is the point and meaning, then, I ASK?

Looney said...

LRD, I think you are right about the dilemma of the evolution meta-narrative being reconciled with Christianity. As you notice, I won't refer to this as 'science', since this is really a brand of intellectual philosophy (anti-theism) that was attached to science 50 years before Darwin wrote his book - and was the dominant university philosophy by the mid-19th century. Thus, as I see it, science was systematically corrupted by anti-theism in the 19th and early 20th century. It should be quite clear why this leads to something rather depressing when we try to view Christianity through the lens of a corrupted science.

Another little point: Most of biological teaching and academic research today is derived from the Dobzhansky Principle: "Nothing in biology makes sense except in the light of evolution."

There is a competing principle, that isn't nearly so famous, which we can call the Looney Principle: "Every application of biology can be fully mastered by someone who rejects evolution." Thus, I know creationists who are successful medical doctors, cancer researchers, gardeners, you name it - even public school biology teachers who are required to teach evolution.

Those two principles seem to me to be completely opposite extremes. Can they both be accurate? Or would you propose a third?

LoneRubberDragon said...

Seeing your reading of Titus Livius as such, I think you would enjoy listening to Gene Scott on the web, as he delves into God and the history of men reducing God into their traditions, or even dispensing with God altogether, as the Atheist Scientists, whom I'll say are not "loose", but are just more prone to it (great whore Babylon pun intended).

I will still continue to respectfully agree to disagree, and to propose a middle path, where evolution and science are both required, and work hand in hand, most synergistically, with evolution solving the unsolvables, and science taking everything good generated into its increasing power. Neither is meta-narrative, and both are necessary.

Reiterating, Science without evolution is lame, and evolution without science is blind.

Brute force, for men, is the evolution of ways, and it is what makes science sprout by finding the rules of patterns never enumerated before. Science then works itself into ever increasing power, through decreasing evolution needs, until only the remnants of NP-complete problems remain unsolved for science.

http://en.wikipedia.org/wiki/NP_%28complexity%29
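For concreteness, here is brute force on an NP-complete problem, subset-sum, sketched in Python. The instance is made up; the point is that the search walks a 2^n candidate space, which is exactly the "evolution of ways" that no known analytics shortcuts in general:

```python
from itertools import combinations

# Brute force as blind exploration: subset-sum asks whether some subset
# of nums adds up to target. We simply check all 2^n candidate subsets.
def subset_sum(nums, target):
    for r in range(len(nums) + 1):           # subset sizes 0..n
        for combo in combinations(nums, r):  # every subset of that size
            if sum(combo) == target:
                return combo                 # first hit found
    return None                              # no subset works

print(subset_sum([3, 9, 8, 4, 5, 7], 15))   # -> (8, 7)
```

For 6 numbers this is 64 subsets; for 60 numbers it is over 10^18, which is why brute force remains at "the remnants" while analytic methods take over everything with exploitable structure.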

That which God so easily just considers in His Mind, He can then execute: "I AM THAT I AM". But thought was required at some point to accomplish Knowledge.

The specific science of modern ilk, which would forever deny there is a soul for salvation, is a bastardization of the true enlightenment of truth.

Likewise, an ID Creation that serves up a literalist 6000 year earth, as some do fervently, with (in my opinion) a deceptive God planting fossils and galaxies millions of light years away, for all the world looking like nature from an initial creation by God, and who will never consider the math of evolution for, say, terraforming other planets with a thorough understanding of systemic evolution that will not corrupt an intelligently designed microbe, is also useless to the true enlightenment of truth, beyond milk engineering. This is not directed at you, but at those even more polarized in their pendulum of thought, more like a reed shaking in the wind, turned this way and that, to and fro.

For me as a Christian Scientist to call evolution only meta-narrative is negative, though not as erroneously negative as an Atheist Scientist calling Intelligent Design Proper non-existent, shooting themselves in the foot at comprehending the science that a natural evolution can generate at the edge of the unknown.

AND I ASK YOU: if analytics of ID are the only key to success, with perfect avoidance of all evolutionary-esque brute force utility explorations, then tell me right now, **what are the methods of writing a "human thinking machine"**??? I say that YOU have no adequate answer to this; for if ID worked well-true in man's hands, man's hands would do little-work anywhere in the world. But then, man has never been the most intelligent of designs, sardonically speaking.



By the way, did you recall any engineering or ID reference to the multivariable multidimensional real-valued Taylor series generalization space? I swear, in all of the college and engineering work I have processed, I have never run across such a space as a named methodology. I remember reading something vague about a power vector space in the late 1990's, but I have not found a good reference on the net since then. I see that multiplying functions is just Dirac delta convolution, and differentiation and integration just shift Dirac deltas according to polynomial rules. I get the gut feeling it is analogous to Laplace transforms in polynomial forms, as N dimensional Taylor series are analogous.
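The multiplication-is-convolution observation can be shown for ordinary one-variable polynomials (a minimal illustration, not the full multidimensional space being asked about): the coefficient sequence of a product is the discrete convolution of the two coefficient sequences.

```python
# Multiplying polynomials = convolving their coefficient sequences.
def poly_mul(a, b):
    """a, b: coefficient lists, lowest power first."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj     # x^i * x^j contributes to x^(i+j)
    return out

# (1 + x) * (1 + 2x + x^2) = 1 + 3x + 3x^2 + x^3
print(poly_mul([1, 1], [1, 2, 1]))   # -> [1, 3, 3, 1]
```

Each coefficient sits like a weighted Dirac delta at its power index, and the product shifts and sums them, which is the convolution structure referred to above.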

Looney said...

LRD, thanks again for the input.

Just a bit more on why I called evolution a meta-narrative. A theory, as I see it, always consists of measurable quantities and fixed relations. Thus, I know the equations V=IR, F=MA, ..., but evolution has no equation. On the other hand, a molecular biologist will use numerous equations from many fields, draw a circle around the entire derivation, and then call it 'evolution'. This is something else, but can never be a theory. Meta-narrative is what I choose, although some others have used the term "framework".

Now I am curious what you would think of another assertion: Because an evolution scenario can be built up from any combination of sub-theories, and there is no need to have any consistency from one evolution scenario to the next, we can also say that evolution is a "tautology". Evolution can explain any data set, even if the data set is fictitious and inconsistent. Furthermore, we see quite mediocre biologists explaining phenomena that are more complex than integrated circuits, where the vast majority of the data is lost - and this with no difficulty at all. Is this truly because of the explanatory powers of evolution? Or perhaps the meaning of the word "explain" has simply been debased. My claim is that evolution is a powerful paradigm for science in the exact sense that the one word answer "Because!" is a useful paradigm for answering all questions that begin with "Why ...?".

"AND I ASK YOU: if analytics of ID are the only key to success, with perfect avoidance of all evolutionary-esque brute force utility explorations, then tell me right now, **what are the methods of writing a "human thinking machine"**??? I say that YOU have no adequate answer to this; for if ID worked well-true in man's hands, man's hands would do little-work anywhere in the world. But then, man has never been the most intelligent of designs, sardonically speaking."

The answer to this is I don't know. Furthermore, I don't believe any human has a clue - nor will they during our life times. The intelligence in the human mind is simply unfathomable to humans, which precludes ID ever becoming subordinate to science. Science will always be the slave of ID.

My main reference with regard to optimization is a book by Scales, "Introduction to non-linear optimization". Many other classical optimization texts proceed this way, but it might not be immediately obvious how they derive from a Taylor series expansion in n-dimensions. This is my daily work in non-linear mechanics, but there aren't many who study it these days. The enthusiasm for GA methods has caused a great decline in familiarity with classical optimization and GA simply does not require the mental discipline. For me, this cuts two ways since I can solve things that the newbies can't using GA, but when the newbies control my budget it can be quite a headache trying to explain to them why a simple quasi-Newton method will give a much more accurate answer a billion times faster than GA.
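The quasi-Newton-versus-GA point can be caricatured on a smooth one-dimensional bowl (all numbers and tolerances below are assumed for illustration): for a quadratic, one Newton step lands exactly on the minimum, while blind random search, which is GA at its bluntest, burns many function evaluations getting merely close.

```python
import random

# Smooth objective with a single minimum at x = 3.
f = lambda x: (x - 3.0) ** 2

# Newton's method: x_new = x - f'(x)/f''(x). For a quadratic this
# converges in exactly one step from anywhere.
x = 10.0
x = x - (2 * (x - 3.0)) / 2.0        # first derivative / second derivative
newton_evals = 1

# Blind random search: sample uniformly until f is small enough.
random.seed(0)                        # assumed seed, for reproducibility
search_evals, best = 0, float("inf")
while best > 1e-3:
    best = min(best, f(random.uniform(-10, 10)))
    search_evals += 1

print(x)                              # 3.0: the exact minimum, in one step
print(search_evals > newton_evals)    # -> True: many more evaluations
```

The gap only widens in higher dimensions and with tighter accuracy demands, which is the budget argument being made: where curvature information is available and the problem is smooth, a quasi-Newton step buys what thousands of blind samples cannot.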

LoneRubberDragon said...

LOO:Just a bit more on why I called evolution a meta-narrative. A theory, as I see it, always consists of measurable quantities and fixed relations. Thus, I know the equations V=IR, F=MA, ..., but evolution has no equation. On the other hand, a molecular biologist will use numerous equations from many fields, draw a circle around the entire derivation, and then call it 'evolution'. This is something else, but can never be a theory. Meta-narrative is what I choose, although some others have used the term "framework".

LRD:You have skillfully articulated your concept of "evolution as meta-narrative" most clearly here. I can wholly agree with the terminology used here, and I find "framework" a more philosophically positive term. I do agree that it is true that evolution is always an art, and not a science, in the manner you detail, as in my own private theories of natural life evolution. This is the weakness and strength of the evolutionary "paradigm", to use another term, slightly more polarized, and yet positive. Evolution, as a blind watchmaker, uses some form of natural building blocks of a medium, in great parallel numbers and combinations, where it explores everything realistically possible in finite time, with no preconceived notions of how things ought to be used. That is, it has no artificial limits that a limited human intelligent designer might impose on a system, from their position of a finite analytic method knowledge paradigm, no matter how well read in the analytics of men. And it is true that evolution, per se, has no equations other than brute force possible combinatorics, where monads (interactive molecules) that bond (combine) easily are the ones that get used, over less reactive monads, and monads in a system that reproduce or catalyze the best become masters over less useful reaction systems. That is an element of the growing science.




LOO:Now I am curious what you would think of another assertion: Because an evolution scenario can be built up from any combination of sub-theories, and there is no need to have any consistency from one evolution scenario to the next, we can also say that evolution is a "tautology". Evolution can explain any data set, even if the data set is fictitious and inconsistent. Furthermore, we see quite mediocre biologists explaining phenomena that are more complex than integrated circuits, where the vast majority of the data is lost - and this with no difficulty at all. Is this truly because of the explanatory powers of evolution? Or perhaps the meaning of the word "explain" has simply been debased. My claim is that evolution is a powerful paradigm for science in the exact sense that the one word answer "Because!" is a useful paradigm for answering all questions that begin with "Why ...?".

LRD:Again, you have crystallized some of the aspects of finite human evolutionary issues in the face of intelligent analytics, often overstating their hypotheses as facts. But they are still hypotheses rooted in the paradigm. The natural world comes with very few analytic rules for operation. Preconceived notions arising from a finite intelligent design paradigm hold back truths in complexity, and are an enemy to natural evolution. So it is natural that evolution theories will cover a variety of conflicting ideas, but are natural in that, as time goes on, the numerous theories will be proven or disproven with greater understanding. To disallow a theory in a finite state of evolutionary knowledge is to withhold the potential of a truth, or of a falsifiable one. It is to judge a person for the rest of their life by their childhood evaluation as "inadequate", "conflicting", and "incompletely rooted in the foundation of truth". It would be to throw away the luminiferous aether concept, which was disproven by one of its own proponents, and shows truth by negation, in a field of uncertainty, in finite human understanding of analytics.

LRD:Evolution, in intelligent design analytics, was very strong in the late 1800 to early 1900 time frame, where numerous theories, based, to greater and lesser extents, on physical analytic derivations and "blind postulates from the air", tried to explain the precession of Mercury's orbit, and the speed of light. Analytics was relatively primitive, so numerous evolved ID analytic proofs of why Mercury precesses were presented. They were not analytic, because if they were, they would all have proven true. Many proved to be only aspects of the truth, like the Lorentz equation, and many more proved to be local minima in the finite analytics of intelligent design. But it was only after hundreds of equations mutated and flourished in the evolutionary soup, some equations of Lorentzian character, that Einstein finally came along, and totally did an evolutionary-revolutionary idea combination by unifying mass, velocity, and time, that was never quite dreamed of before in such a combination. Was it analytical, when it was at the edges of the map of the known world? Was it evolutionary, by combining ideas blindly, in the depths of the unknown regions of the map? It was made with no hard status quo ID preconceived notions, except for the atoms of ideas existing in the soup of early relativistic physics. It was a deep connection, and unlikely to be derived by systematic provable intelligent design methods, because of the simple fact that there was no real intelligence extant in those far uncharted waters. So analytics and evolution must work hand in hand. And what evolution makes, analytics then comes along and gathers the truest parts, to build analytics even greater, and yet still finite. If it remains finite, then evolution is not dead. But for evolution to claim that intelligent design is dead is an even more erroneous extrapolation of what is truth, made by those few proponents that are dyed in that wool over their own eyes.
The warfare between science and religion, is mostly a war of wrongly dividing the word of one truth, which is why I adhere to the middle path, without sufficient proof of one being absolutely superior to the other.

LRD:In another example, Friedrich August Kekulé von Stradonitz apocryphally derived the structure of benzene in a dream, with linking monkeys, with no formal intelligent design analytics, but only having the monads of carbon and hydrogen floating around in his mind, in countless combinations, lacking a computer to rigorously go through every blind combination, which shows that dreams rooted in a brute force evolution, without a strict ID guide, can be fruitful for generating analytics. And once generated, the analytics become the science, from the art.

LRD:And take navigating a freeway, which has some analytic processes in the local finite intelligence domain. You can plan ahead moves to some degree, and execute them with infinite precision, but if a vehicle suddenly, and non-analytically BY ANY MEANS, changes lanes, you must recompute the path in a domain you hadn't considered. Then you may put a camera in the sky, and predict all of the potential changes in an ever branching set of predictions, but analytics will reach a finite serial capability of seeing into the future, with all possible branches being intelligently analytically computed. You may zoom through the traffic with a carefully selected path, with numerous alternate paths that will get you the furthest no matter how the unanalytical unknowns change. But even here, nature pushes you unanalytically, and ID cannot surpass the limitations of being forced into specific configuration paths, especially in the most unanalytical of *accidents*. Amazingly, evolutionary algorithms are stuck in the same boat here, as they can only do the same. At the point of chaos potentials, analytics and evolution are one and the same. Both exist at the limits of the non-formal-equation nature of being, and both can see all that can be seen, to the limits of non-analytical, unknown, nature.

LRD:Finally, an example can be made of true art. It is evolutionary all the way, and it always grows to explore things that hadn't been conceived of before. There are no equations, but only rules of thumb, hazy analogies, random mutational serendipity. Science and intelligent design, in this sense, will always be detached from the arts, as unfathomable, as much as consciousness below is found to be unfathomable by a science desiring an analytic equation, where only a system and gestalt will suffice, that is closer to a rhizome or an evolution than an intelligent design.

LRD:I don't know if explain and tautology are applicable terms, either. Explaining human thought is unfathomable to some, and fathomable to others, but even explaining a simple 741 opamp is no trivial one liner. So calling out the "debasement" of "explaining" is a little strong, where system complexity is concerned. As complexity increases, explanations grow to fill in the details of the hierarchical map of nature's inherent steps and levels of interactions in inherent systems. And a chemistry that finds its own way blindly, by natural reaction numbers measured by their own facility of occurring, may be "tautological" in sound, but it is the nature of three-dimensional monads. Inherent is a much better word than tautological, which carries the connotation of fallaciousness in philosophy. Inherent is gravity's equation. Inherent are chemical reactions in a combinatorial matrix in feedback. Inherent is the electromagnetic equation. Inherent is the Schrödinger Equation. Inherent is what it is. God is the I AM WHAT I AM. Is God a tautology, in your own words? For inherent is what, perhaps, a most clever designer allowed. But to call "inherent" a "tautology", though, is semantically-politically fallacious in its own right.




LRD1:"AND I ASK YOU: if analytics of ID are the only key to success, with perfect avoidance of all evolutionary-esque brute force utility explorations, then tell me right now, **what are the methods of writing a "human thinking machine"**??? I say that YOU have no adequate answer to this; for if ID worked well-true in man's hands, man's hands would do little-work anywhere in the world. But then, man has never been the most intelligent of designs, sardonically speaking."

LOO1:The answer to this is I don't know. Furthermore, I don't believe any human has a clue - nor will they during our lifetimes. The intelligence in the human mind is simply unfathomable to humans, which precludes ID ever becoming subordinate to science. Science will always be the slave of ID.

LRD2:Well, at least you are predictable and honest in the prediction. It is interesting, though, as I have numerous finite inklings of how to make intelligent computers like a human. God can make children for Abraham from living stones. To say it is unknown and unknowable in our generation is pessimistic. It only takes a sharp sense of thought, as unanalytical as it may prove to be, with numerous local minima of software that don't capture the full essence, but with a potential that exists, because *we* exist. So unfathomable? Hardly so. The literature on AI is quite deep when you know where to look, with numerous crystals of pure glass scattered about, like the ideas on time, matter, energy, and light in the late 1800's period. And we stand at the threshold of understanding human consciousness, and machine thought, with computers that have already taken over 99% of all old school work, from Gutenberg to calculations to art, with analytic evolutionary grace. And the way you've rotated the words, perhaps you can say, science and evolution will always be the slaves of ID, now that ID is extant.




LOO:My main reference with regard to optimization is a book by Scales, "Introduction to non-linear optimization". Many other classical optimization texts proceed this way, but it might not be immediately obvious how they derive from a Taylor series expansion in n-dimensions. This is my daily work in non-linear mechanics, but there aren't many who study it these days. The enthusiasm for GA methods has caused a great decline in familiarity with classical optimization and GA simply does not require the mental discipline. For me, this cuts two ways since I can solve things that the newbies can't using GA, but when the newbies control my budget it can be quite a headache trying to explain to them why a simple quasi-Newton method will give a much more accurate answer a billion times faster than GA.

LRD:This I can agree with. Formal analytics must be used where possible, when time is of the essence. If the work is more research oriented, they may be attempting to push the boundaries of the GA to make them "fit" while they are still too "young". And I will admit, as you have noted, that GA and such are often misapplied outside of their semi-structured domains that lie outside of the analytical methods. Though if they never get funding to explore nonlinear-GA, how will the GA ever learn its own domain? A field for their evolution must be set up, somehow, though not at the expense of also expanding formal nonlinear analysis. However, I must note that though nonlinear optimization techniques are powerful for what they do, they also reach the point where structured-evolution or intelligent-brute-force is required, that fusion of ID and evolution, a la traffic school, so it must occur. Nonlinear, by its very definition, is a space with an exponential branching factor in deep time predictions outside of the linearizable analytics of "neat" nonlinear optimization. At this point is where genetic algorithms must come into play, using all of the atoms in the nonlinear analysis "book", and exploring the monads, in chained and hierarchical systems of analytic non-linearity linearizations of time patches. This takes the best of both worlds. Formal analysis is pushed to its limits, and GA are used where they are best needed in algorithm characterization of appropriateness of application. And a billion times faster? Hmmmm. Don't know if that's a true-to-the-essence description, politically speaking. It sounds more like poorly chosen monads in the GA, and a no-memory initial boundary condition at the start of the algorithm. From scratch, a billion to one is not so bad for comparing a GA thought-derivation machine to an instinct machine optimized Newton-Raphson method. Such a comparison is apples to oranges (no Newton puns, please).
And a good GA, once it finds the solution, should keep the answer, and be comparable to the Newton-Raphson analytic. I'm sure it took Newton a few months to derive the idea at first, too. And he was ID. But once developed, it becomes science. I'd have to hear more description of your observations on this example, to explicate your words on the GA/NR comparison in my mind.
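The speed gap under discussion can be made concrete with a toy example (the function, population size, and mutation-only "GA" below are invented for illustration; real GA and quasi-Newton codes are far more elaborate): on a smooth quadratic, a Newton step with analytic derivatives lands on the minimum at once, while a blind mutation search spends thousands of function evaluations only getting close.

```python
# Toy comparison (illustrative only): minimizing the smooth function
# f(x) = (x - 3)^2 + 1 by Newton's method versus a naive
# mutate-and-select "GA". All constants here are invented.
import random

def f(x):
    return (x - 3.0) ** 2 + 1.0

# Newton's method with analytic derivatives: f'(x) = 2(x - 3), f''(x) = 2.
x = 0.0
newton_steps = 0
while abs(2.0 * (x - 3.0)) > 1e-12:
    x -= 2.0 * (x - 3.0) / 2.0   # one Newton step; exact on a quadratic
    newton_steps += 1

# Naive GA: gaussian mutation around the best individual, keep the best.
random.seed(0)
best = random.uniform(-10.0, 10.0)
ga_evals = 0
for _ in range(200):                       # 200 generations
    pop = [best + random.gauss(0.0, 0.5) for _ in range(20)]
    ga_evals += len(pop)
    best = min(pop + [best], key=f)

print(newton_steps)   # 1 step: a quadratic is solved exactly
print(ga_evals)       # 4000 evaluations, still only approximately at x = 3
```

On genuinely rough, non-smooth search spaces the comparison can invert, which is LRD's point above; this sketch only illustrates the smooth analytic case, where derivative information wins outright.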

LRD:I must get that book, though, to accompany my encyclopedia of differential equations! It sounds quite fascinating, especially if it is newer and more expansive in coverage than the 1960s research on nonlinear equations. Although, it is a subject so old that Aleksandr Lyapunov did chaos research at the turn of AD 1900, and Leonardo da Vinci studied the non-analytic flow of billowing clouds and water turbulence. And I admit too, not too many people study nonlinear dynamics these days, which is a shame.

LRD:I do admit, though, that evolution can be at times a political liturgy not convenient for a true scientist, which is why I secretly believe you are wielding the anti-spin sword against evolution, to equalize the local political climate, as much as I am wielding the anti-spin sword against a limiting ID-only paradigm. To ignore brute force, and intelligently guided brute force, evolutionary methods, is to ignore an entire domain of processing that exists where unknowns dominate in depth or numbers. Imagine a DNA bacterial computer designed in our lifetime, intelligently, but evolving, to search all possible answers to a complex situational problem. No serial analytical intelligent algorithms alone, in finite time, could achieve such an answer, where massively parallel exploration CAN. It is a half gospel to ignore or denigrate evolution in the face of true intelligently designed science, as much as evolution tries to claim it is science alone, at the denigration of the human soul as nothing, and unfathomable to analytics. Either claim rings hollow. Both must work together, in the ordinary finite knowledge time-space. And an intelligent science that ignores evolution is like a science that denies the soul, or finds all of the arts and thoughts of man unfathomable. It leads to a sickly ID church, that is supposed to be of all the living words of truth, that cannot lead, as it is lame. Both must exist, in political and truth-seeking harmony.

LRD:Now you have me curious, though. As an interested worker in nonlinearity problems, I would like to know what work you do in nonlinear systems. The only major place I know of is the Santa Fe Institute in San Diego, but that's a little distant from Fremont. A list of projects would suffice to familiarize me with your line of work. Systemic descriptions can give me deeper knowledge of some of your common problems.

LoneRubberDragon said...

Also, what are those books you're reading, from Seneca to Livius? What titles do you read that compile these works most efficiently and effectively? Are they all English translations, I hope, and not your own personal translation of the original Latin? I can only currently cope with learning Chinese, Korean, Japanese, Russian, Spanish, Greek, and Hebrew, so I don't think I want to pick up Latin right now, other than the anglicized versions for ease of reading.

Yeah, those writers, and the apostolic fathers' gnostic writings, are a couple of ancient blocks of writers that I've always wanted to read up on, but I could use a good book recommendation. I've already got a good start on the Buddhist and Confucian analects.

If you want a good bible site with multiple versions in multiple languages, and a good Greek word-translated Interlinear, you should visit any of the below inter-chained sites:

http://biblos.com/
http://multilingualbible.com/matthew/1-1.htm
http://bible.cc/matthew/1-1.htm
http://kingjbible.com/matthew/1.htm
http://nasb.scripturetext.com/genesis/1.htm
http://interlinear.biblos.com/

Looney said...

The books I have been reading are typically from Borders or Barnes & Nobles, although sometimes I order them through Amazon. It is more or less related to my Bible teaching at church. The recent trend is to try to establish the context of the Bible as much as possible by comparing and contrasting with adjacent cultures beginning with Ugarit and Gilgamesh and moving forward to Philo and the early church fathers. My view was that I should read as much literature as possible in translation and use commentaries as secondary, rather than primary study materials.

So some of the titles I am working on:

Seneca - Letters from a Stoic - Penguin Classics
Livy - The Early History of Rome - Barnes & Noble
Geza Vermes - The Dead Sea Scrolls in English - Penguin Press
Tacitus - The Annals & The Histories - The Modern Library Classics

On my reading list:

Xenophon - A History of My Times - Penguin Classics
Strassler - The Landmark Thucydides.

My first exercise of this sort involved reading Herodotus cover to cover and comparing with the Bible, especially the book of Esther. It was a tremendous eye opener as the match up was incredible and the commentaries I read completely missed things. Blogging about these topics is a good way to reinforce what I learn.

LoneRubberDragon said...

Do you ever read C. S. Lewis? I've heard some interesting passages relating to life, death, God, pleasure, and pain. For example, I heard taught in relation to Lewis from Gene Scott before, roughly, those who have not experienced pain, are living in an illusion. It is only when one feels suffering, that one knows something about truth and good, and an essence of what is evil's effect. Some pain is part of being tested, much like the Biblical Gold in the fire, to clear the dross, and keep the purified and tried metal.

He is a Christian writer from the 1900's, with an almost 1800's sense-ability of thought. Along with some of your suggestions, he sounds like another writer worth getting the whole book on.

So you teach the Bible. Oh man, that is a big responsibility, but impressive, if your words are as good as your sermons and expositions. I would have to study the Bible to be as good as I know I might be, but I am far from there, and I've read that the teachers are held to a high standard of accountability by God. I just can't remember the verses involved, as I read that almost 8 years ago, but I should dredge it back up, being on the internet speaking on what little I know, from my perspective, and from God.

So, otherwise, what work do you do that is involved with nonlinear optimization and the occasional artificial intelligence submodule trials, and shortcomings of the packages? Maybe you have links or keywords from your blog on your work, in nitty gritty and broad description? On my perusal through a *few* of your posts, I only saw hints at your work and software.

Also, you can take him with a grain of salt, but here's a video stream link-in-page for Scott, for what it is worth, on general Biblical, thematic, historical, philosophical teachings, and great ways of speaking:

http://www.drgenescott.org/

Melissa Scott teaches more on linguistics in multi-languages, and has her own link-in page:

http://www.pastormelissascott.com/

Looney said...

Hello LRD,

My work has mostly been in the field of mechanics doing impact and crash dynamics. Much of this was car crash related and I did considerable work on airbag deployment and dummy modeling. Then there was modeling birds going into airplane engines, ship collisions, underwater mines next to ships ...

I have been fairly evenly split between software development, modeling various problems, and trouble shooting problems from various customers.

There are numerous sub-problems in the modeling that involve optimization in one form or another. Frequently the optimization problems are at a low level like in the material modeling physics and must be solved millions of times per simulation. In other cases, it is done at a macro level where we are sizing overall members of a design.

Looney said...

Also, I read C.S. Lewis a lot, but mostly when I was much younger so that I have forgotten much of what he wrote. I will take a look at the other links.

LoneRubberDragon said...

OH YEAH! I see. I've studied materials a little regarding deformations. Systematic nonlinearities of that nature are some of the hardest problems to model, when plastic deformation is involved, and non-Newtonian flow, like rheopectic and thixotropic fluids. I hear a lot of empirical characterization modeling is required to bulk-capture the dynamics in models, e.g. do a few test impacts, and then capture the numerical character of such systems, to help refine the finite element modeling systems. I agree, intelligent analytical methods there are well suited and matured, in such relatively well structured work-field types, with few evolutionary algorithm opportunities, there-too.

But, impressive work you do! And a theologian, and a Latinate ... incredible in these days. Kind of a Renaissance man, huh?

Regards, lrd.

LoneRubberDragon said...

Did you ever read C. S. Lewis, "The Screwtape Letters"?

Looney said...

LRD, thanks for the confidence, but I am certainly nowhere near a "Latinate". Even my pig-Latin is weak! I just picked up a few English translations of Latin writers and noted things that jumped out at me - and this due to reading commentaries which claimed insights based on classical literature. This has kept me a little focused in terms of what I select for reading.

My jobs have taken me around the world, so I have had a chance to see and experience far more than is reasonable. This experience is from God and is to be used for Him.

My most recent optimization problem has involved multiple materials with different non-linear equations of state and iterating to bring them into pressure equilibrium. That was the subject of my earlier link related to the genetic algorithm. This condition can also occur millions of times within a simulation, so the optimization algorithm must be robust over a very wide range of conditions and fast also.
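A hedged sketch of the kind of sub-problem described (this is not Looney's actual code; both equations of state and their constants are invented for illustration): two materials share a fixed cell volume, and a Newton iteration on the volume split drives their pressures into equilibrium.

```python
# Two materials sharing a fixed cell volume V; iterate the volume split
# until their pressures match. Both EOS forms and constants are invented.
def p1(v):                  # material 1: stiff EOS, p = A / v^3 - B
    return 10.0 / v**3 - 2.0

def p2(v):                  # material 2: softer EOS, p = C / v
    return 3.0 / v

V = 2.0                     # total cell volume shared by the materials
v = 1.0                     # current volume assigned to material 1
for _ in range(100):
    r = p1(v) - p2(V - v)   # residual: pressure mismatch
    if abs(r) < 1e-10:
        break
    # analytic derivative of the residual with respect to v
    dr = -30.0 / v**4 - 3.0 / (V - v)**2
    v -= r / dr             # Newton update

print(v, p1(v), p2(V - v))  # pressures agree at the converged split
```

In production, as noted above, such a solve must be robust over a very wide range of states and run millions of times per simulation, so safeguarding the Newton step (bracketing, damping) matters as much as raw iteration count.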

I did read much of The Screwtape Letters. This is actually something that I keep in mind with my other reading. I like to read classical writers to help learn about the context of the Bible, but C.S. Lewis warns that we shouldn't be keeping our thoughts so much on demons, but on the Lord and things which are good. A similar but bigger problem is probably associated with the scholarship of Ugaritic. They tend to be completely immersed in the literature of Baal, but how to keep your focus on the Lord? It is an interesting puzzle to me, and The Screwtape Letters provide a warning that I always need to come back to Jesus and the Bible and keep my primary focus there.

LoneRubberDragon said...

Ig-pay atin-ley!? LOL. Funny guy!

Yeah, reading translations isn't exactly reading the Latin, but your familiarity with Roman thinking and idea structures, depending on the closeness of the translation, is invaluable on many levels, like you've noted. Like biblos.com is great to see Greek word-for-word interlineared, in one of their tools, making learning ancient Greek writing styles lucid. When I read French writers, well translated into English, one gets a good flavor of the French language and thought, straight through the English, as one can tell there's a nuance to ideas and word selection that is noticeably foreign.

And my studies of Chinese and Japanese are even more interesting and challenging, as they are hierarchical languages in meaning, where a few strokes in a combination called a radical have some abstract meaning, and then are, themselves, combined to make an ideogram square, and then those ideograms are often combined to make complex new words. For example, take the English:

["computer"]

can be three chinese ideograms:

[計|算|機]

roughly translated

[計"idea" | 算"calculation" | 機"machine"]

made of radicals:
[計[accent bars and box | cross] |
算[double lambdas with bars | triple box | two legs with bar]
機[cross with two dropping branches | E looking mark | E looking mark | bar with swooping right descending hook crossed by left descending slash and accent | left descending slash crossing the bar and side branch on right]]

roughly translating the radicals as:

[計[speech | to-complete] |
算[bamboo | vision | presenting] |
機[tree | tiny | tiny | weapon | divines]

which in english words reflecting the culture reads roughly:

"an object for 計[wordings in completion, reminiscent of ideas] which are acted with 算[bamboo abacus examination and presentation, reminiscent of calculating] in the form of an 機["wooden" object with many tiny parts like weapon constuction which has operations, reminiscent of machine and performs (divines) things]"

and they simply think "computer" when they see this hierarchical [計"idea" | 算"calculation" | 機"machine"] tri-ideogram-chain.

Their whole language is couched in metaphor, and hierarchical dynamic thinking, with a heavy burden of ancient concepts brought into the modern world. Like computers could be made of tiny wood machine parts abstract-concretely, like Babbage's difference engine of gears, or Jacquard's card loom, but are so much easier to make in silicon and doped circuits on the silicon.

~

Wow, sounds like an interesting recent problem in FE Analysis, you have had with multi-material partitions with optimizations.

I too have been given a great deal of experience beyond my short years engineering, and a mind God gives much to, to help the plan, I hope.

I don't know if you wrap assembly code in the inner loops for speed optimization of vector and matrix processes, but have you ever noticed late Windows C compilers generate a lot of overhead ASM lines, in what for a DSP processor code composer would be a 4-8 ASM command inner loop? Another example of bloatware, I think(?).

~

Yeah, it is good to keep your eyes on God, but God also warns in an aphorism for "one to be wiser than the serpent", not by becoming evil, but to "know your enemy", to partially quote an old Frank Capra WW2 film. Moses was wiser than the serpent, and with God's power with him and Aaron, Aaron took his staff and turned it into a snake that ate the snakes of the Pharaoh's magicians, because their magic is no comparison to God's power and authority.

The C.S. Lewis book I heard being taught sounds like another interesting book to get, given your additional comments. Agreed, there isn't a demon around every corner, but evil and its nature is ever present in the world. Thanks.

LoneRubberDragon said...

I can more formally reiterate the earlier position on open system abiogenesis theory, with pseudocode, formal math equations, and various points of statistical chemistry reference, here.

(Funny thing, though, in what I've read so far, I haven't been able to tell yet if you are a young earth (6000 year universe and earth) Creationist, or an old earth (12 billion, 4.5 billion year) Creationist?)

ABIOGENESIS CHEMICAL EVOLUTION

BACKGROUND
A natural combinatorial chemistry feedback, in an appropriate open system ocean with inherent natural reactions and hypercycle catalytic reactions, can alone suffice to create an increasing-complexity chemistry that eventually intersects biochemistry, as evidenced by modern life. And an early earth ocean can have a greater amount of dissolved organics and minerals, with no presumed life forms processing the chemicals into their own makeups. It would all be dissolved in the oceans, and washing off the early continents in deltas, lake beds, or tidal mud flats, in evaporative concentrations.

COMBINATORIAL CHEMISTRY 1
Now combinatorial chemistry can be generalized to parallel numbers chemistry that combinatorially explores all feasible interactions of all chemical species available in a chemical environment, like an early earth ocean environment with bays, tides, hydrothermal vents, sunlight with or without UV, dark areas deep in the water or under rocks for protection from UV and sunlight, lightning, pH variation, evaporative concentration, and currents to mix a natural, initially inorganic chemical soup with hundreds of minerals, metal ions, etc., in a preorganic molecule soup.

HYPERCYCLE CATALYTIC CHEMISTRY
Hypercycle catalytic reactions are subsets of the whole combinatorial chemistry reaction matrix, where, A helps catalyze B helps catalyze C helps catalyze A, from other present chemical species, as an example of a short hypercycle loop of three nodes. Hypercycle catalytic reactions can be loops, and networks, embedded within a normal combinatorial chemistry matrix.
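The short three-node loop can be put in numbers with a minimal replicator-style simulation (the rate constants and starting fractions below are invented for illustration, not taken from Eigen's work): each species' growth is catalyzed by its predecessor in the loop, and subtracting the mean catalytic flux keeps the total concentration fixed.

```python
# Minimal 3-member hypercycle: A catalyzes B, B catalyzes C, C catalyzes A.
# x[i] is the fraction of species i; x[i-1] wraps around, closing the loop.
k = [1.0, 1.2, 0.8]          # catalytic rate constants (invented)
x = [0.6, 0.3, 0.1]          # starting fractions of A, B, C (sum to 1)
dt = 0.01
for _ in range(20000):
    # mean catalytic flux; subtracting it keeps sum(x) constant
    phi = sum(k[i] * x[i] * x[i - 1] for i in range(3))
    x = [x[i] + dt * x[i] * (k[i] * x[i - 1] - phi) for i in range(3)]

print([round(f, 3) for f in x], round(sum(x), 6))
# all three members persist at nonzero fractions: the loop sustains itself
```

For loops of three or four members this settles to a stable interior equilibrium; longer hypercycles are known to oscillate, which is part of what makes Eigen's analysis interesting.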

COMBINATORIAL CHEMISTRY 2
Going back to combinatorial chemistry, let's say the ocean begins with 1000 species of chemicals and chemical-inducing factors, S, such as chemicals, photons of light from infrared to UV, radioactive particles in the half-life-rich materials of an early earth recently formed from supernova debris, different-energy free electrons from lightning, mixing currents, and heating and cooling around hydrothermal vents. An approximate top-level pseudocode (which can be glossed over to reach the final math characteristics after it) for the differential balance of reactions is:

# Python-style restatement of the nested-loop pseudocode:
# combinations(range(S), s) replaces the hand-nested loops s1 < s2 < ... < ss.
InitialSpecies = S
InitialAverageConcentration = sum(Concentration[s] for s in range(S)) / S

for s in range(1, S + 1):                      # how many species in a reaction
    for Reaction in combinations(range(S), s): # every subset, no repeats
        # calculate the net chemical species change for this reaction
        # set over a unit of differential time
        NewSpecies = F1(Reaction)              # any new species S' produced
        NewConcentrations = F2(Reaction)       # updates over S and S'

FinalSpecies = S + Sprime                      # Sprime = count of new species
FinalAverageConcentration = sum(Concentration[s] for s in range(S + Sprime)) / (S + Sprime)

Linguistically, this can be interpreted as taking 1 to S chemicals at a time, in every combination, to observe the reaction rates of the current S chemical species, s at a time, and to see the effect on all S and any possible new S' chemical species generated that did not exist before. For example, for two species taken from a given 1000 species S, there are (1/2)*(S^2 - S), or 499,500, Reaction{s1,s2} nodes, each with positive or negative reaction rates for existing species S, or for new species S'. That is, say, S1 + S2 might break down S1, catalyzed by S2, into S3 and S4, with S2 remaining untouched. S1 has a negative reaction rate as it breaks down, while S3 and S4 have positive reaction rates, as S1 is turned into S3 and S4 in the presence of S2. On the other hand, say, S1 + S2 produces a totally new chemical outside of S, called S'1, by S1 and S2 combining to form S'1. S1 and S2 have negative reaction rates, being consumed, while the new S'1 has a positive reaction rate. These reaction rates also change in time, as the concentrations used by the F1(Reaction{s set}) and F2(Reaction{s set}) calculations increase or decrease accordingly.

At the same time, there are more reactions to analyze, continuing with three chemicals in a Reaction{s1,s2,s3} analysis, where there are (1/6)*S*(S-1)*(S-2), or about 166 million, reaction nodes. So of these millions of Reaction{s1,s2,s3}, many will have no effects, some will break down or build up products already existing, and others will make new chemical species that never existed before, from the species that exist in the ocean to begin with, S.

Mathematically analyzing reactant combinations, from s = 1 for single-molecule auto-reactions up to s = S for an all-species reaction, in total there are:

ReactionNodes = SUM( s = 1 to S ) [ S! / ( s! * (S-s)! ) ]

or, equivalently,

ReactionNodes = 2^S - 1, which for S = 1000 chemical species is 2^1000 - 1 ~= 10^301

reaction nodes, where

(1) the majority of non-reactions change nothing, (2) some break down species, (3) some build up species, and (4) some generate new chemical species. So starting with 1000 chemical species, with, say, 1000 new chemical species S' formed out of the 10^301 nodes (a conservative rate of 1 in 10^298 being effective stable new chemical species), in a year there can be 2000 species of flourishing chemicals, leading to 10^602 reaction nodes to analyze for all potential reactions at each node, generating, say, 2000 new species of chemicals (an even more conservative rate of new chemical species formation). So then, after another year, there are 4000 chemical species at some concentration, with 10^1204 reaction nodes, generating, say, 4000 new species (even more conservative relative to the combinations available), added into the next year's variation.
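The counts above are easy to check with exact integer arithmetic (standard-library Python; only the species count S = 1000 comes from the text):

```python
# Verify the reaction-node counts: pairs, triples, and the full sum
# over all non-empty subsets of S species, which equals 2^S - 1.
from math import comb

S = 1000
pairs = comb(S, 2)                           # 2-species reaction nodes
triples = comb(S, 3)                         # 3-species reaction nodes
total = sum(comb(S, s) for s in range(1, S + 1))

print(pairs)                  # 499500, matching (1/2)(S^2 - S)
print(triples)                # 166167000, about 166 million
print(total == 2**S - 1)      # True
print(len(str(2**S)))         # 302 digits, i.e. 2^1000 ~= 10^301
```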

So one can see an exponential feedback of chemical species, some more robust than others, in numbers, durability, variation, reaction rate selection forces, hypercycle catalytic reproduction, and reactivity, from 1000 to 2000 to 4000 and so on, until there is a low but significant saturation of millions of various reactive catalytic chemical species in a gallon of ocean, all competing for the ocean's limited chemical resources, and giving rise to potential natural metabolic pathways absorbing glucose and photons of light, in complex reaction sets, paths, cycles, and networks, that support reproducing hypercycle networks of catalytic chemicals, all inherent and naturally contained in the combinatorial chemistry feedback matrix growing in time. Presumably, something akin to photosynthesis must have arisen early to oxygenate the atmosphere, as part of sugar production.

A Creationist claim would have to show that the 2^S reaction nodes, in an S-chemical-species example ocean, would permit no (zero) new chemical species to form, and thus would remain in static chemical equilibrium. But given the massive potential of 10^301 reaction combinations in a mixing ocean of combinatorial chemistry size S in feedback, even a very minor positive rate of new chemical species formation would provide a numerical backbone for natural blind chemical evolution turning into life, as chemical species reach continually higher levels of complexity and variety, with competition and selection forces, in the combinatorial chemistry feedback, from the very beginning of chemistry: in robust reactive new molecules, contained in chained catalytic reactions, and with a form of digital chemistry, contained in the discrete chemical species, and in the discrete codes of polymer proteins, RNA, and DNA nucleotide chains, that are eventually intersected by combinatorial chemistry, with a proven positive dS/dt.

Even just 100 chemicals in an initial energy-open-system ocean would allow 2^100, or about 10^30, possible reactions, so even small chemical soups start with an inherent potential for new-chemical-species feedback growth of complexity, without external guidance being an absolute necessity.

REFERENCE MATERIAL
Clay catalysis of existing RNA base polymerization, and adsorption and release characteristics:

http://www.rpi.edu/dept/chem/chem_faculty/profiles/pdfs/ferris/ELEM_V1n3_145-150.pdf

http://www.ncbi.nlm.nih.gov/pubmed/11539614

Lipid and early combinatorial chemistry protocell theory:

http://exploringorigins.org/protocells.html

Hypercycle chemistry:

http://en.wikipedia.org/wiki/Manfred_Eigen

Combinatorial chemistry:

http://en.wikipedia.org/wiki/Combinatorial_chemistry

http://en.wikipedia.org/wiki/Oparin

http://en.wikipedia.org/wiki/Abiogenesis

Miscellaneous:

http://en.wikipedia.org/wiki/Miller_urey

http://en.wikipedia.org/wiki/Astrochemistry

MODERN SEA WATER CONTENTS (WITH MOST BIOCHEMICALS CAPTURED IN LIVING MATTER NOT DISSOLVED IN WATER):

http://en.wikipedia.org/wiki/Image:Sea_salt-e_hg.svg

http://en.wikipedia.org/wiki/Sea_water#Geochemical_explanations

http://en.wikipedia.org/wiki/Sea_water#Salinity

http://www.seafriends.org.nz/oceano/seawater.htm#composition

http://www.sciencedaily.com/releases/1998/02/980204071316.htm