Measuring Complex Specified Information with Respect to Biology
http://intelligentreasoning.blogspot.com.br/2012/01/measuring-complex-specified-information.html
Once again, I don't know why this is so difficult, but here it is:
Complex specified information is a specified subset of Shannon information. That is, complex specified information is Shannon information of a specified nature, i.e., with meaning and/or function, and with a specified complexity.
Shannon's theory tells us that since there are 4 possible nucleotides, log2(4) = 2 bits of information per nucleotide. Likewise, there are 64 coding codons, so log2(64) = 6 bits of information per codon, and therefore 6 bits per amino acid, the same as the three nucleotides it was translated from.
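A quick Python sketch of that arithmetic (just evaluating log2 of the alphabet size, nothing more):

```python
import math

def bits_per_symbol(alphabet_size):
    """Shannon bits per symbol for an alphabet of equally likely symbols."""
    return math.log2(alphabet_size)

print(bits_per_symbol(4))   # 4 nucleotides -> 2.0 bits each
print(bits_per_symbol(64))  # 64 codons -> 6.0 bits each
```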
Take, for example, a functioning protein 100 amino acids long that cannot tolerate any variation, which means it is tightly specified, and just do the math: 100 x 6 + 6 (stop codon) = 606 bits of specified information, minimum, to get that protein. That means CSI is present and design is strongly supported.
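The tally above, spelled out in Python (the protein length and the zero-tolerance assumption are the example's, not a measured value):

```python
# Fully constrained 100-residue protein: 6 bits per codon, plus a stop codon.
residues = 100
bits_per_codon = 6
total_bits = residues * bits_per_codon + bits_per_codon  # stop codon adds 6 more
print(total_bits)  # 606
```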
Now if any sequence of 100 amino acids could produce that protein, then it isn't specified. IOW, if every possible combination produced the same resulting protein, I would say that would put a hurt on the design inference.
The variational tolerance has to be figured in with the number of bits.
from Kirk K. Durston, David K. Y. Chiu, David L. Abel, Jack T. Trevors, “Measuring the functional sequence complexity of proteins,” Theoretical Biology and Medical Modelling, Vol. 4:47 (2007):
Neither RSC [Random Sequence Complexity] nor OSC [Ordered Sequence Complexity], or any combination of the two, is sufficient to describe the functional complexity observed in living organisms, for neither includes the additional dimension of functionality, which is essential for life. FSC [Functional Sequence Complexity] includes the dimension of functionality. Szostak argued that neither Shannon’s original measure of uncertainty nor the measure of algorithmic complexity are sufficient. Shannon's classical information theory does not consider the meaning, or function, of a message. Algorithmic complexity fails to account for the observation that “different molecular structures may be functionally equivalent.” For this reason, Szostak suggested that a new measure of information—functional information—is required.
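The functional information Szostak proposed can be sketched as I = -log2(M/N), where M is the number of sequences that perform the function and N is the total number of possible sequences. The counts below are illustrative, chosen to connect back to the 606-bit example and to show how variational tolerance lowers the bit count:

```python
import math

def functional_information(functional_count, total_count):
    """Bits of functional information: -log2(fraction of sequences that work)."""
    return -math.log2(functional_count / total_count)

# Zero tolerance: exactly one 303-nucleotide sequence (100 codons + stop)
# out of 4**303 possibilities works, reproducing the 606-bit figure.
print(functional_information(1, 4 ** 303))         # 606.0

# With variational tolerance, say 2**100 sequences work, the count drops.
print(functional_information(2 ** 100, 4 ** 303))  # 506.0
```

This is why the variational tolerance has to be figured in: the more sequences that perform the function, the fewer bits of functional information the sequence carries.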
With text we use 5 bits per character, which gives us the 26 letters of the alphabet plus 6 other characters. The paper cited above puts it all together, and it is peer-reviewed. It tells you exactly how to measure the functional information, which is exactly what Dembski and Meyer are talking about wrt CSI. So read the paper: it tells how to do exactly what you have been saying no one knows how to do. It isn't pro-ID, and its use of AVIDA as evidence of "emergence" is dubious*, but the math is there for you to misunderstand or not comprehend.
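The 5-bit figure is the same log2 arithmetic applied to text (the 32-symbol alphabet here is the example's assumption):

```python
import math

# 5 bits per character covers 2**5 = 32 symbols:
# 26 letters plus 6 other characters.
alphabet = 26 + 6
print(alphabet)             # 32
print(math.log2(alphabet))  # 5.0 bits per character
```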