AMBER Archive (2003)
Subject: Re: AMBER: Implicit precision in sander vs architecture
From: David E. Konerding (dekonerding_at_lbl.gov)
Date: Fri Oct 17 2003 - 12:05:14 CDT
Yong Duan wrote:
>Well, not really.
>
>32-bit system refers to the hardware capacity. One can use 2-bit to
>construct a 64-bit (or longer) word. That's how it is done on some of
>those old calculators. In those good-old days when Intel still called
>their first generation chips 8080 (which was the chip for IBM PC and IBM
>PC compatibles), one still needed to use 64-bit words for accuracy
>purposes.
>
>The consequence is the speed. To do a 64-bit floating point add (which
>is the only logic hard-wired in a computer; multiplication and even
>subtraction are just variations of addition. In other words, computers
>really only know how to add), 32-bit machines need to do it at least
>twice (depending on the length of their registers).
>
>I heard this talk. The speaker claimed that the 64-bit machine made it
>possible to construct the first human gene map because 32-bit machine
>would only allow one to represent data up to 4 billion. Gosh, what if our
>genes were one bit too long to be stored as a 64-bit integer? We would
>be doomed! I think we all had a big sigh of relief that our genes are
>not that long. We human beings are smart, but not too smart :).
>
>So, the short answer to your question is, no, do not worry. Unless you
>specify "REAL*4" or single precision, you will always get REAL*8 or
>"double precision" for free even though your machine could be a 32-bit
>machine. This is the beauty of AMBER --- it doubles the capacity of your
>machines. So stay with AMBER!! :)
>
>
Yong is confusing the meaning of 64-bit as it is conventionally used in
the marketing literature with the actual
technical details of how integer data is implemented on digital hardware.
64-bit is normally used these days to describe the size of the address
register and the amount of directly addressable memory.
It's not as relevant when applied to the size of the arithmetical,
integer, or floating point registers. For example, my
32-bit Intel PC can only address memory using a 32-bit range, but it
natively implements a 64-bit integer and an 80-bit floating
point type. No extra "work" or "passes" are being done when I add two
64-bit integers or multiply two 80-bit FP ones (on my hardware).
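A quick way to see this from C (an illustrative sketch only; it assumes
gcc on a 32-bit Linux/x86 box, where 'long long' is the native 64-bit
integer type and 'long double' maps onto the x87 80-bit extended type;
other compilers and platforms may differ):

    #include <float.h>
    #include <stdio.h>

    int main(void) {
        /* Sizes and significand widths of the native types.  With gcc on
           32-bit Linux/x86, 'long long' is 64 bits and 'long double' is
           the x87 80-bit extended type (stored padded in memory, with a
           64-bit significand). */
        printf("long long  : %u bytes\n", (unsigned)sizeof(long long));
        printf("double     : %u bytes, %d-bit significand\n",
               (unsigned)sizeof(double), DBL_MANT_DIG);
        printf("long double: %u bytes, %d-bit significand\n",
               (unsigned)sizeof(long double), LDBL_MANT_DIG);
        return 0;
    }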
Furthermore, Yong, with regard to the 64-bit human genome mapping: they
made a totally valid point (given their algorithmic design). Their map
assembly program was not operating in gene sequence space but in
chromosomal contig space (many of those contigs were millions of bases
long), and it held all contigs in memory at once. The point was that
there were too many contigs (due to the massively overlapping shotgun
sequencing used) to hold in RAM at once on a 32-bit machine.
I doubt the speaker (who was most likely Gene Myers or somebody who
worked for him) actually meant to claim that gene sequences were
expressible in 64-bit data types but too large to hold in a 32-bit
memory. It should be noted that the public genome project managed to
assemble its genome on a cluster of 32-bit machines with a different
algorithm.
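For concreteness, the limit in question is the address space, not the
integer data types: a 32-bit pointer can only name 2^32 bytes (about
4 GiB), so a contig set larger than that simply cannot sit in one
process's RAM. A trivial illustration in C (nothing AMBER- or
genome-specific about it):

    #include <stdio.h>

    int main(void) {
        /* A 32-bit pointer can distinguish at most 2^32 addresses, so a
           single process on a 32-bit machine can address at most ~4 GiB. */
        unsigned long long max_bytes = 1ULL << 32;
        printf("32-bit address space: %llu bytes (~%llu GiB)\n",
               max_bytes, max_bytes >> 30);
        return 0;
    }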
AMBER does all its important calculations in 80-bit floating point
representation on most typical hardware architectures. How precisely it
computes results depends entirely
on the implementation details, which can vary widely between two
architectures. That may well be the cause of the user's original problem
(seeing a vlimit exceeded on a Xeon but not seeing it on an IBM): they
likely already had a 'hotspot' in their system that was leading to large
forces, and the different implementations of the math on the two
architectures diverged, one in a direction that led to a failure.
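To make that last point concrete, here is a minimal, self-contained C
sketch (not AMBER code, just an illustration of the general effect) of
how keeping intermediates in the x87's 80-bit registers can give a
different answer than rounding every intermediate to a 64-bit double.
Compiled for classic x87 (e.g. gcc on 32-bit x86) the sum below can come
out as 1, while with strict double intermediates (e.g. -mfpmath=sse or
-ffloat-store) it comes out as 0:

    #include <stdio.h>

    int main(void) {
        /* 1e16 + 1 is exactly representable in the x87's 64-bit
           significand, but not in double's 53-bit significand, so where
           the intermediate result gets rounded changes the final answer. */
        volatile double a = 1e16, b = 1.0, c = -1e16; /* volatile: no constant folding */
        double r = (a + b) + c;
        printf("(1e16 + 1) - 1e16 = %g\n", r);
        return 0;
    }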
Dave
-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber_at_scripps.edu
To unsubscribe, send "unsubscribe amber" to majordomo_at_scripps.edu