Friday, December 31, 2010


The cytosol also contains the cytoskeleton, which is composed of microtubules, microfilaments, and intermediate filaments. The cytoskeleton supports the cell's shape and makes possible the movement of organelles within the cytoplasm. Some microtubules lie embedded in the cytosol and are called cytoplasmic microtubules, while others serve as structural components of organelles such as cilia, flagella, and centrioles. Microfilaments are contractile proteins that drive movement within the cytoplasm, for example cytoplasmic streaming in plant cells and the amoeboid movement of leukocytes.

To learn about prokaryotic and eukaryotic cells, please click Sel Prokariotik Dan Eukariotik.

Tuesday, December 28, 2010

Intermediate Physics

Intermediate Physics Lab (P440/441) – Preparation of Manuscripts
Rather than a conventional lab report, the group will produce a manuscript in the form required for the
research journal Physical Review Letters. The only difference will be the length: your manuscript will be
no more than nine pages total, including abstract, text, references, figures & figure captions, and tables.
The purposes of the manuscript are the same as for the final presentation, which we repeat here:
(a) To tell us what you did and why. Do not assume that your reader already knows the new physics that
you have had to learn to do the experiment. Do not assume your reader knows anything at all about the
experimental setup. Show at least some real data to make your reader believe you really did an experiment.
(b) To convince your always-skeptical reader that you understand the main sources of uncertainty in your results.
(c) To present your results in an honest, straightforward way. Do not oversell by pretending to measure
results that you really didn't; do not undersell by failing to extract all the possible value from your data and
observations. Of course, any omissions or errors that came to light during the final presentation must be
dealt with correctly in the final manuscript.
(d) To present raw data as well as the results of analysis. Large data tables are not helpful; usually a plot
will suffice. The text can provide, for example the parameters from a curve-fitting analysis (in which case
the best-fit curve should be shown on the plot).
The journal Physical Review Letters (PRL) provides guides to style and formatting, to which you can refer.
They are available on the web (along with the journal itself); you can also find information on the PACS numbers there, or use Google. But please do not worry about details of font type, and do note that the length limitation that we use (specified on the next page) differs from that of the journal.
1. Get a copy of a recent article in PRL, either electronically or at the library, to use as a guide to follow.
NOTE: As printed, the manuscript is two columns, single spaced, with figures embedded. As submitted to
instructors and your peers, the manuscript is one column, double spaced. To facilitate editing, place each
figure on a separate page at the end of the manuscript. Each figure must have an informative caption ("Fig.
1. Diagram of the apparatus. The blah, blah, blah...."). See the following page for details.
2. The designated first author will produce a manuscript using a word processor (or LaTeX/REVTeX, if you
know how and wish to). The draft must include all necessary elements: abstract, PACS numbers,
references, figure captions, and figures.
3. Print one copy for the instructors and give it to them in person or to the designated place. These
manuscripts are due on the designated date and must meet the style requirements. Using the form on
the next page, the instructors will do a quick “editorial style check”. Major noncompliance with the style
guidelines must be repaired immediately before further consideration of the manuscript. Once the
style guidelines are met, we will review the draft and add our comments.
4. At the same time, distribute copies of the draft for the other group members. (Use the ILab file cabinet
and email notices to aid in distribution to your colleagues.)
5. The other group members will mark up their copies of the draft, suggesting changes both minor and
major. We expect substantial editing from each group member, not just a few minor changes.
6. The final version of the manuscript is due one week after comments on the edited drafts are
received. (This is specified in the course schedule.) IMPORTANT: Please bring the first drafts with the
written comments to the lab or to the instructor’s office, along with a single copy of the final manuscript.
Monday, December 20, 2010

Advanced Physics

Advanced Character Physics

This paper explains the basic elements of an approach to physically-based modeling which is well suited for interactive use. It is simple, fast, and quite stable, and in its basic version the method does not require knowledge of advanced mathematical subjects (although it is based on a solid mathematical foundation). It allows for simulation of cloth, soft and rigid bodies, and even articulated or constrained bodies using both forward and inverse kinematics.

The algorithms were developed for IO Interactive’s game Hitman: Codename 47. There, among other things, the physics system was responsible for the movement of cloth, plants, rigid bodies, and for making dead human bodies fall in unique ways depending on where they were hit, fully interacting with the environment (resulting in the press oxymoron “lifelike death animations”).

The article also deals with subtleties like penetration test optimization and friction handling.

1 Introduction
The use of physically-based modeling to produce nice-looking animation has been considered for some time and many of the existing techniques are fairly sophisticated. Different approaches have been proposed in the literature [Baraff, Mirtich, Witkin, and others] and much effort has been put into the construction of algorithms that are accurate and reliable. Actually, precise simulation methods for physics and dynamics have been known for quite some time from engineering. However, for games and interactive use, accuracy is really not the primary concern (although it’s certainly nice to have) – rather, here the important goals are believability (the programmer can cheat as much as he wants if the player still feels immersed) and speed of execution (only a certain time per frame will be allocated to the physics engine). In the case of physics simulation, the word believability also covers stability; a method is no good if objects seem to drift through obstacles or vibrate when they should be lying still, or if cloth particles tend to “blow up”.

The methods demonstrated in this paper were created in an attempt to reach these goals. The algorithms were developed and implemented by the author for use in IO Interactive’s computer game Hitman: Codename 47, and have all been integrated in IO’s in-house game engine Glacier. The methods proved to be quite simple to implement (compared to other schemes at least) and have high performance.

The algorithm is iterative such that, from a certain point, it can be stopped at any time. This gives us a very useful time/accuracy trade-off: If a small source of inaccuracy is accepted, the code can be allowed to run faster; this error margin can even be adjusted adaptively at run-time. In some cases, the method is as much as an order of magnitude faster than other existing methods. It also handles both collision and resting contact in the same framework and nicely copes with stacked boxes and other situations that stress a physics engine.

In overview, the success of the method comes from the right combination of several techniques that all benefit from each other:
• A so-called Verlet integration scheme.
• Handling collisions and penetrations by projection.
• A simple constraint solver using relaxation.
• A nice square root approximation that gives a solid speed-up.
• Modeling rigid bodies as particles with constraints.
• An optimized collision engine with the ability to calculate penetration depths.

Each of the above subjects will be explained shortly. In writing this document, the author has tried to make it accessible to the widest possible audience without losing vital information necessary for implementation. This means that technical mathematical explanations and notions are kept to a minimum if not crucial to understanding the subject. The goal is demonstrating the possibility of implementing quite advanced and stable physics simulations without dealing with loads of mathematical intricacies.

The content is organized as follows. First, in Section 2, a “velocity-less” representation of a particle system will be described. It has several advantages, stability most notably and the fact that constraints are simple to implement. Section 3 describes how collision handling takes place. Then, in Section 4, the particle system is extended with constraints allowing us to model cloth. Section 5 explains how to set up a suitably constrained particle system in order to emulate a rigid body. Next, in Section 6, it is demonstrated how to further extend the system to allow articulated bodies (that is, systems of interconnected rigid bodies with angular and other constraints). Section 7 contains various notes and shares some experience on implementing friction
etc. Finally, Section 8 offers a brief conclusion.

In the following, bold typeface indicates vectors. Vector components are indexed by using subscript, i.e., x=(x1, x2, x3).

2 Verlet integration
The heart of the simulation is a particle system. Typically, in implementations of particle systems, each particle has two main variables: Its position x and its velocity v. Then in the time-stepping loop, the new position x’ and velocity v’ are often computed by applying the rules

x' = x + v*t
v' = v + a*t

where t is the time step, and a is the acceleration computed using Newton’s law f=ma (where f is the accumulated force acting on the particle). This is simple Euler integration.

Here, however, we choose a velocity-less representation and another integration scheme: Instead of storing each particle’s position and velocity, we store its current position x and its previous position x*. Keeping the time step fixed, the update rule (or integration step) is then

x' = 2x - x* + a*t^2
x* = x

This is called Verlet integration (see [Verlet]) and is used intensely when simulating molecular dynamics. It is quite stable since the velocity is implicitly given and consequently it is harder for velocity and position to come out of sync. (As a side note, the well-known demo effect for creating ripples in water uses a similar approach.) It works due to the fact that 2x-x*=x+(x-x*) and x-x* is an approximation of the current velocity (actually, it’s the distance traveled last time step). It is not always very accurate (energy might leave the system, i.e., dissipate) but it’s fast and stable. By lowering the value 2 to something like 1.99 a small amount of drag can also be introduced to the system.

At the end of each step, for each particle the current position x gets stored in the corresponding variable x*. Note that when manipulating many particles, a useful optimization is possible by simply swapping array pointers.

The resulting code would look something like this (the Vector3 class should contain the appropriate member functions and overloaded operators for manipulation of vectors):

// Sample code for physics simulation
class ParticleSystem {
   Vector3 m_x[NUM_PARTICLES];    // Current positions
   Vector3 m_oldx[NUM_PARTICLES]; // Previous positions
   Vector3 m_a[NUM_PARTICLES];    // Force accumulators
   Vector3 m_vGravity;            // Gravity
   float m_fTimeStep;
   void TimeStep();
   void Verlet();
   void SatisfyConstraints();
   void AccumulateForces();
   // (constructors, initialization etc. omitted)
};

// Verlet integration step
void ParticleSystem::Verlet() {
   for(int i=0; i<NUM_PARTICLES; i++) {
      Vector3& x = m_x[i];
      Vector3 temp = x;
      Vector3& oldx = m_oldx[i];
      Vector3& a = m_a[i];
      x += x - oldx + a*m_fTimeStep*m_fTimeStep;
      oldx = temp;
   }
}

A stick constraint between two particles can also be turned into an inequality that only enforces a minimum distance (e.g., the constraint |x2-x1| > 100). As a direct result, the two particles will never come too close to each other. See Figure 8.
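Such an inequality constraint can be satisfied by projection inside the relaxation loop: move the particles only when they are too close. A minimal sketch assuming equal masses (the type and function names here are mine, not the paper's):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// Enforce |x2 - x1| >= minDist by pushing both particles apart along
// their connecting line; each particle takes half of the correction.
// A regular (equality) stick constraint would also pull the particles
// together when they drift too far apart.
void satisfyMinDistance(Vec3& x1, Vec3& x2, double minDist) {
    Vec3 d{x2.x - x1.x, x2.y - x1.y, x2.z - x1.z};
    double len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    if (len == 0.0 || len >= minDist) return; // already satisfied
    double half = 0.5 * (minDist - len) / len;
    x1.x -= d.x * half; x1.y -= d.y * half; x1.z -= d.z * half;
    x2.x += d.x * half; x2.y += d.y * half; x2.z += d.z * half;
}
```

Two particles that are 60 units apart with minDist = 100 end up exactly 100 units apart after one projection, each having moved 20 units.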

Another method for restraining angles is to satisfy a dot product constraint:

Particles can also be restricted to move, for example, in certain planes only. Once again, particles with positions not satisfying the above-mentioned constraints should be moved – deciding exactly how is slightly more complicated than with the stick constraints.

Actually, in Hitman corpses aren’t composed of rigid bodies modeled by tetrahedrons. They are simpler yet, as they consist of particles connected by stick constraints in effect forming stick figures. See Figure 9. The position and orientation for each limb (a vector and a matrix) are then derived for rendering purposes from the particle positions using various cross products and vector normalizations (making certain that knees and elbows bend naturally).

In other words, seen in isolation each limb is not a rigid body with the usual 6 degrees of freedom. This means that, physically, the rotation around the length axis of a limb is not simulated. Instead, the skeletal animation system used to set up the polygonal mesh of the character is forced to orient the leg, for instance, such that the knee appears to bend naturally. Since rotation of legs and arms around the length axis does not comprise the essential motion of a falling human body, this works out okay and actually optimizes speed by a great deal.

Angular constraints are implemented to enforce limitations of the human anatomy. Simple self collision is taken care of by strategically introducing inequality distance constraints as discussed above, for example between the two knees – making sure that the legs never cross.

For collision with the environment, which consists of triangles, each stick is modeled as a capped cylinder. Somewhere in the collision system, a subroutine handles collisions between capped cylinders and triangles. When a collision is found, the penetration depth and points are extracted, and the collision is then handled for the offending stick in question exactly as described in the beginning of Section 5.

Naturally, a lot of additional tweaking was necessary to get the result just right.

7 Miscellaneous
This section contains various remarks that didn't fit anywhere else.

Motion control
To influence the motion of a simulated object, one simply moves the particles correspondingly. If a person is hit at the shoulder, move the shoulder particle backwards over a distance proportional to the strength of the blow. The Verlet integrator will then automatically set the shoulder in motion.

This also makes it easy for the simulation to ‘inherit’ velocities from an underlying traditional animation system. Simply record the positions of the particles for two frames and then give them to the Verlet integrator, which then automatically continues the motion. Bombs can be implemented by pushing each particle in the system away from the explosion over a distance inversely proportional to the square distance between the particle and the bomb center.
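The bomb effect can be sketched like this; because only the current position is displaced, the Verlet integrator turns the displacement into velocity on the next step (applyBlast and the strength constant are my own hypothetical names and tuning, not values from the game):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// Push a particle's current position directly away from the blast center,
// over a distance inversely proportional to the squared distance.
// 'strength' is a hypothetical tuning constant.
void applyBlast(Vec3& x, const Vec3& center, double strength) {
    Vec3 d{x.x - center.x, x.y - center.y, x.z - center.z};
    double r2 = d.x * d.x + d.y * d.y + d.z * d.z;
    if (r2 == 0.0) return;        // particle exactly at the center
    double push = strength / r2;  // inverse-square falloff
    double r = std::sqrt(r2);
    x.x += d.x / r * push;
    x.y += d.y / r * push;
    x.z += d.z / r * push;
}
```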

It is possible to constrain a specific limb, say the hand, to a fixed position in space. In this way, one can implement inverse kinematics (IK): Inside the relaxation loop, keep setting the position of a specific particle (or several particles) to the position(s) wanted. Giving the particle infinite mass (invmass=0) helps making it immovable to the physics system. In Hitman, this strategy is used when dragging corpses; the hand (or neck or foot) of the corpse is constrained to follow the hand of the player.

Handling friction
Friction has not been taken care of yet. This means that, unless we do something more, particles will slide along the floor as if it were made of ice. According to the Coulomb friction model, the friction force depends on the size of the normal force between the objects in contact. To implement this, we measure the penetration depth dp when a penetration has occurred (before projecting the penetration point out of the obstacle). After projecting the particle onto the surface, the tangential velocity vt is then reduced by an amount proportional to dp (the proportionality factor being the friction constant). This is done by appropriately modifying x*. See Figure 10. Care should be taken that the tangential velocity does not reverse its direction – in this case one should simply set it to zero, since this indicates that the penetration point has ceased to move tangentially. Other and better friction models than this could and should be implemented.
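One way to realize this in a Verlet setting is to modify x* so that the implicit velocity x - x* loses tangential magnitude proportional to the penetration depth. A sketch under my own naming and conventions (the paper gives no code for this step):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// After the particle has been projected onto the surface with unit normal n,
// shrink the tangential part of the implicit velocity v = x - x* by
// kFriction * dp, clamping at zero so the tangential motion never reverses.
void applyFriction(const Vec3& x, Vec3& oldx, const Vec3& n,
                   double dp, double kFriction) {
    Vec3 v{x.x - oldx.x, x.y - oldx.y, x.z - oldx.z};
    double vn = v.x * n.x + v.y * n.y + v.z * n.z;           // normal part
    Vec3 vt{v.x - vn * n.x, v.y - vn * n.y, v.z - vn * n.z}; // tangential part
    double vtLen = std::sqrt(vt.x * vt.x + vt.y * vt.y + vt.z * vt.z);
    if (vtLen == 0.0) return;
    double scale = 1.0 - kFriction * dp / vtLen;
    if (scale < 0.0) scale = 0.0;  // don't reverse the tangential direction
    // Re-encode the reduced velocity into the previous position.
    oldx.x = x.x - (vn * n.x + vt.x * scale);
    oldx.y = x.y - (vn * n.y + vt.y * scale);
    oldx.z = x.z - (vn * n.z + vt.z * scale);
}
```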

Collision detection
One of the bottlenecks in physics simulation as presented here lies in the collision detection, which is potentially performed several times inside the relaxation loop. It is possible, however, to iterate a different number of times over the various constraints and still obtain good results.

In Hitman, the collision system works by culling all triangles inside the bounding box of the simulated object (this is done using an octree approach). For each (static, background) triangle, a structure for fast collision queries against capped cylinders is then constructed and cached. This strategy gave quite a speed boost.

To prevent objects that are moving really fast from passing through other obstacles (because of too large time steps), a simple test is performed. Imagine the line (or a capped cylinder of proper radius) beginning at the position of the object’s midpoint last frame and ending at the position of the object’s midpoint at the current frame. If this line hits anything, then the object position is set to the point of collision. Though this can theoretically give problems, in practice it works fine.

Another collision ‘cheat’ is used for dead bodies. If the unusual thing happens that a fast moving limb ends up being placed with the ends of the capped cylinder on each side of a wall, the cylinder is projected to the side of the wall where the cylinder is connected to the torso.

The number of relaxation iterations used in Hitman varies between 1 and 10 depending on the kind of object simulated. Although this is not enough to accurately solve the global system of constraints, it is sufficient to make motion seem natural. The nice thing about this scheme is that inaccuracies do not accumulate or persist visually in the system causing object drift or the like – in some sense the combination of projection and the Verlet scheme manages to distribute complex calculations over several frames (other schemes have to use further stabilization techniques, like Baumgarte stabilization). Fortunately, the inaccuracies are smallest or even nonexistent when there is little motion and greatest when there is heavy motion – this is nice since fast or complex motion somewhat masks small inaccuracies for the human eye.

A kind of soft bodies can also be implemented by using ‘soft’ constraints, i.e., constraints that are allowed to have only a certain percentage of the deviation ‘repaired’ each frame (i.e., if the rest length of a stick between two particles is 100 but the actual distance is 60, the relaxation code could first set the distance to 80 instead of 100, next frame 90, 95, 97.5 etc.).
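The numbers in the example above (rest length 100, actual distance 60, corrected to 80) correspond to repairing half of the deviation per frame. A minimal sketch assuming equal masses (the names are mine, not the paper's):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// 'Soft' stick between equal-mass particles: repair only the fraction
// 'stiffness' (in (0, 1]) of the deviation from restLength per call.
// stiffness = 1 reproduces the rigid stick constraint.
void satisfySoftStick(Vec3& x1, Vec3& x2, double restLength, double stiffness) {
    Vec3 d{x2.x - x1.x, x2.y - x1.y, x2.z - x1.z};
    double len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    if (len == 0.0) return;
    double corr = 0.5 * stiffness * (restLength - len) / len;
    x1.x -= d.x * corr; x1.y -= d.y * corr; x1.z -= d.z * corr;
    x2.x += d.x * corr; x2.y += d.y * corr; x2.z += d.z * corr;
}
```

Called once per frame with stiffness 0.5, a 60-unit distance becomes 80, then 90, then 95, matching the sequence in the text.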

As mentioned, we have purposefully refrained from using heavy mathematical notation in order to reach an audience with a broader background. This means that even though the methods presented are firmly based mathematically, their origins may appear somewhat vague or even magical.

For the mathematically inclined, however, what we are doing is actually a sort of time-stepping approach to solving differential inclusions (a variant of differential equations) using a simple sort of interior-point algorithm (see [Stewart] where a similar approach is discussed). When trying to satisfy the constraints, we are actually projecting the system state onto the manifold described by the constraints. This, in turn, is done by solving a system of linear equations. The linear equations or code to solve the constraints can be obtained by deriving the Jacobian of the constraint functions. In this article, relaxation has been discussed as an implicit way of solving the system. Although we haven’t touched the subject here, it is sometimes useful to change the relaxation coefficient or even to use over-relaxation (see [Press] for an explanation). Since relaxation solvers sometimes converge slowly, one might also choose to explicitly construct the equation system and use other methods to solve it (for example a sparse matrix conjugate gradient descent solver with preconditioning using the results from the previous frame (thereby utilizing coherence)).

Note that the Verlet integrator scheme exists in a number of variants, e.g., the Leapfrog integrator and the velocity Verlet integrator. Accuracy might be improved by using these.

Singularities (divisions by zero usually brought about by coinciding particles) can be handled by slightly dislocating particles at random.

As an optimization, bodies should time out when they have fallen to rest.

To toy with the animation system for dead characters in Hitman: Codename 47, open the Hitman.ini file and add the two lines “enableconsole 1” and “consolecmd ip_debug 1” at the bottom. Pointing the cursor at an enemy and pressing shift+F9 will cause a small bomb to explode in his vicinity sending him flying. Press K to toggle free-cam mode (camera is controlled by cursor keys, shift, and ctrl).

Note that since all operations basically take place on the particle level, the algorithms should be very suitable for vector processing (Playstation 2 for example).

8 Conclusion
This paper has described how a physics system was implemented in Hitman. The underlying philosophy of combining iterative methods with a stable integrator has proven to be successful and useful for implementation in computer games. Most notably, the unified particle-based framework, which handles both collisions and contact, and the ability to trade off speed vs. accuracy without accumulating visually obvious errors are powerful features. Naturally, there are still many specifics that can be improved upon. In particular, the tetrahedron model for rigid bodies needs some work. This is in the works.

At IO Interactive, we have recently done some experiments with interactive water and gas simulation using the full Navier-Stokes equations. We are currently looking into applying techniques similar to the ones demonstrated in this paper in the hope to produce faster and more stable water simulation.

9 Acknowledgements
The author wishes to thank Jeroen Wagenaar for fruitful discussions and the entire crew at IO Interactive for cooperation and for producing such a great working environment.

Feedback and comments are very welcome at

[Baraff] Baraff, David, Dynamic Simulation of Non-Penetrating Rigid Bodies, Ph.D. thesis, Dept. of Computer Science, Cornell University, 1992.
[Mirtich] Mirtich, Brian V., Impulse-based Dynamic Simulation of Rigid Body Systems, Ph.D. thesis, University of California at Berkeley, 1996.
[Press] Press, William H., et al., Numerical Recipes, Cambridge University Press, 1993.
[Stewart] Stewart, D. E., and J. C. Trinkle, “An Implicit Time-Stepping Scheme for Rigid Body Dynamics with Inelastic Collisions and Coulomb Friction”, International Journal of Numerical Methods in Engineering, to appear.
[Verlet] Verlet, L. "Computer experiments on classical fluids. I. Thermodynamical properties of Lennard-Jones molecules", Phys. Rev., 159, 98-103 (1967).
[Witkin] Witkin, Andrew and David Baraff, Physically Based Modeling: Principles and Practice, Siggraph ’97 course notes, 1997.
Saturday, December 18, 2010

Basic Physics

Energy - a Basic Physics Concept and a Social Value

Abstract: Though it emerged relatively recently as a physics concept, energy has become the most transcendent concept in physics and a pervasive entity in our lives. Thirty years ago the Arab Oil Embargo caused us to stop taking gasoline for granted and caused me to start teaching students about the importance of energy and give special emphasis to the physics underlying it. Most recently my appreciation of energy was enhanced by developing a workshop manual on this topic for the Physics Teaching Resource Agent program of the American Association of Physics Teachers. I would like to share with you some of the key insights I gained from that experience.

Thirty years ago I began teaching at The Calhoun School in New York City. Soon after I arrived, the Arab Oil Embargo meant that the availability of gasoline at the corner service station could no longer be taken for granted, and before year's end I would pay in excess of a dollar for a gallon of it for the first time. The term "energy crisis" entered our vocabulary, and at Calhoun we decided to start a seminar about it.

That seminar later led to more organized and systematic teaching about energy, first in a course on "Critical Social Issues" and later in a physical science course called "Energy for the Future." I got involved with the educational work of the National Energy Foundation, then headquartered in New York City, spent two summers working on NSTA's "Project for an Energy Enriched Curriculum," and became a Resource Agent for the New York Energy Education Project.

Although my energy-focused physical science course gave way to Conceptual Physics and later Active Physics, after Paul Hewitt convinced me in 1989 that physics could and should be taught to ninth graders, only last year did I return to my earlier "life" as an energy educator and develop an Active Physics-formatted chapter on energy issues, in which the challenge was the same as the final exam of my former course: for students to plan their energy future without fossil fuels.

I reported on this chapter at last summer's meeting at one of the Physics and Society sessions, and I fell into telling Jim Nelson about it during the weeklong PTRA institute that preceded the meeting. Before week's end I heard him ask me, "How would you like to develop a workshop manual on energy for us?"

As you know, it's hard to say "no" to Jim Nelson; and, besides, I looked at this as a new opportunity to address a topic that had always seemed to hold out a dual appeal to me: energy was at once the vital essence we needed to make things happen in our lives and also the most elusive concept I had ever encountered, yet one which made its presence felt in every nook and cranny of physics. I had long rejected the textbook definition that "energy is the ability to do work," yet never felt comfortable with any pat alternative.

I asked Jim whether I should include stuff about energy issues that I used to include in my energy-focused physical science course and also included in my Active-Physics formatted chapter, and he said "yes." But I knew, from the format of the many PTRA workshop manuals I had seen over the years, that he wanted the basic stuff in there, too, and that this would mean motivating the basic concept of energy.

I ended up liking this activity so well that I had my students do it last year. One group obtained the data for force vs. distance along the slope shown in Fig. 1, which you can see looks like an inverse type of relationship. Borrowing from what I have learned about the Modeling approach to linearize graphs, I then asked them to plot force vs. the reciprocal of distance, and they got the linear relationship shown in Fig. 2.

The consequence of this relationship between force and distance along the slope is that, regardless of the slope, the product of the force and distance is an invariant. Now invariance is an indication that something is special in science. This told me that this product of force and distance had some special significance, which in turn could merit giving it a special name, which, for want of further originality, we could call "work."
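The invariance can be checked with idealized numbers: on a frictionless incline, raising a cart of mass m to height h along a slope of length d requires a force F = mgh/d, so F times d equals mgh no matter how gentle the slope. A hypothetical calculation, not the students' actual data:

```cpp
#include <cassert>
#include <cmath>

// Work needed to pull a cart of mass m (kg) up a frictionless incline of
// length d (m) to height h (m): the force along the slope is F = m*g*h/d,
// so the product F*d always equals m*g*h regardless of the slope.
double workAlongSlope(double m, double g, double h, double d) {
    double force = m * g * h / d;
    return force * d;
}
```

A 2 kg cart raised 1 m takes 4.9 N over a 4 m slope or 1.96 N over a 10 m slope, but the product is 19.6 J either way.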

But I felt that more than just the concept of work was motivated by this invariance of force x distance. All the expressions for work done were equal to the work required to lift the cart up directly, and this further motivates the concept of potential energy as something that is gained by an object when it is lifted, with the potential energy gain equal to the work done.

If potential energy is gained when a roller coaster is lifted to the top of the first hill, it is lost when the coaster goes down the hill. But when it rolls down the hill, the coaster starts to move, and it moves faster the farther it rolls down the hill. Is there a correspondence between the increase in motion and the decrease in potential energy? If so, can we say that the potential energy is not "lost" but rather "transformed" into something related to the cart's motion as it rolls down the hill?

The advent of photogates to use with CBLs and LabPros meant we could try that one too -- in fact, one book I will never write is "Physics Without Photogates." The results from one of my groups of students are shown in Fig. 3. That a graph of velocity vs. PE lost veers off to the right of a straight line suggests linearizing by plotting the square of velocity vs. PE lost (Fig. 4).

Here I went a step further, one that I learned last summer in the PTRA "Graphical Analysis" workshop conducted by Modelers Rex and Debbie Rice. They taught me to determine the equation for the straight line by measuring the slope and identifying its units, which in this case turn out to be the reciprocal of mass in kg. I then sought to express the slope as a number divided by the only mass in this experiment, the mass of the cart. The whole number closest to my numerator was "2," and I was experimentally led to the conventional expression for kinetic energy.
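The slope the students found follows from energy conservation: if all the lost potential energy appears as kinetic energy, PE = (1/2)mv^2, so v^2 = (2/m)*PE, a straight line whose slope 2/m indeed has units of 1/kg. A hypothetical check, not the class's data:

```cpp
#include <cassert>
#include <cmath>

// If all lost potential energy becomes kinetic energy, PE = (1/2) m v^2,
// so a plot of v^2 against PE lost is a line of slope 2/m (units 1/kg).
double vSquaredFromPE(double peLost, double m) {
    return 2.0 * peLost / m;
}
```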

I was really starting to enjoy this odyssey, in which I could not only motivate but also determine the conventional expressions for energy experimentally. This part was, in fact, a continuation of my realization at last summer's PTRA workshop at the Harrisonburg, VA, "rural center" at James Madison University that we were able to derive the equations of motion experimentally in the "Kinematics" workshop there.

But would it work for elastic energy? I mulled this one around for some time, because I knew that there were added complications -- the presence of kinetic and gravitational potential as well as elastic energy. I settled on a vertical oscillating spring, because I had previously been able to make good measurements of its position with a motion sensor (Fig. 5). Just as I had determined the expression for kinetic energy by finding out what function of velocity corresponded to gravitational potential energy lost by a cart rolling down an incline, I now used kinetic energy lost as a way to measure the potential energy of the oscillating spring. A graph of displacement vs. PE for my 48 data points (Fig. 6) looks absolutely cacophonous, but when I squared the displacement, a linear pattern started to emerge (Fig. 7). Furthermore, the units of the slope turn out to be the reciprocal of those for the spring constant. The slope obtained from doing a linear regression on my TI-83 turned out to be remarkably close to 2 divided by the spring constant.

Thus began a new odyssey for me. I started by searching for a way to motivate the concept of energy that would be interesting and relevant to students' lives -- and came up with the idea of designing a roller coaster. The Physics Day at the Amusement Park worksheets ask students why roller coasters use a gentle slope to the top of the first hill, and I recast this into having students measure the force needed to pull a cart up an incline to a given height (the height of the first hill) -- and the corresponding distance required for different slopes of the incline.

But I wasn't off the hook so easily on this one. My reviewers protested that I hadn't included the gravitational potential energy, which I was embarrassed to find was not negligible. But the fact that the data were so good kept gnawing at me. Then I realized the answer. Gravitational potential energy adds a linear term in the displacement, and adding a linear term to a quadratic term still gives a parabola, only with a shifted vertex. I was able to show that

the quadratic dependence on displacement was really about the equilibrium point y = -mg/k and that

(1/2)k(y + mg/k)^2 = (1/2)ky^2 + mgy + m^2g^2/(2k),

with the first term being elastic potential energy for displacement of a spring with no weight suspended from it, the second term being the weight's gravitational energy, and the third term just a constant (of no significance in defining potential energy).
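The completed square above can be checked numerically; in this sketch the mass, spring constant, and sampled displacements are all illustrative:

```python
# Numerical check of (1/2) k (y + mg/k)^2 = (1/2) k y^2 + m g y + m^2 g^2 / (2k),
# using illustrative values for the mass, spring constant, and displacements.
m, g, k = 0.25, 9.8, 12.0   # kg, m/s^2, N/m

for y in (-0.3, 0.0, 0.17):
    lhs = 0.5 * k * (y + m * g / k) ** 2
    rhs = 0.5 * k * y**2 + m * g * y + m**2 * g**2 / (2 * k)
    assert abs(lhs - rhs) < 1e-12
print("identity holds for all tested displacements")
```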

I next wanted to show the transformation of gravitational potential or kinetic energy into other forms, such as electrical and thermal. I knew I could show transformation from electric to thermal by the "electrical equivalent of heat" experiment, which I had done for years -- except that I used to use it as a way to measure the correspondence between the number of calories (or Calories) of thermal energy output and the number of joules of electrical energy input. Now, though, with calories "out," I was embarrassed to be finding more joules of thermal energy output than electrical energy input. If I was going to put this in a PTRA manual, I'd have to get this bug out.
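The bookkeeping in this experiment is just a comparison of V·I·t with m·c·ΔT. A minimal sketch, with invented meter and calorimeter readings (the problem runs described here, of course, gave ratios above 1):

```python
# Sketch of the "electrical equivalent of heat" bookkeeping, using
# hypothetical meter readings (V, I, t) and calorimeter data (m, dT).
C_WATER = 4186.0  # specific heat of water, J/(kg*K)

def electrical_energy(volts, amps, seconds):
    """Electrical energy delivered to the heater: E = V I t, in joules."""
    return volts * amps * seconds

def thermal_energy(mass_kg, delta_T, c=C_WATER):
    """Thermal energy gained by the water: Q = m c dT, in joules."""
    return mass_kg * c * delta_T

E_in = electrical_energy(6.0, 2.5, 300.0)   # 4500 J over 5 minutes
Q_out = thermal_energy(0.150, 6.8)          # ~4270 J
print(f"E_in = {E_in:.0f} J, Q_out = {Q_out:.0f} J, "
      f"ratio = {Q_out / E_in:.2f}")
```

A ratio at or below 1 is what one should see; a ratio above 1 is the "bug" described in the text.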

I'm telling you about this in case you have had a similar problem. What I did one afternoon was to set up four electrical equivalent of heat experiments, with four different models of DC power supply, and I found that one gave me reasonable results, while the other three gave me the excess thermal energy output described above. Rotations among the electric meters caused no change, and I was led to conclude that it was the DC power supplies that underlay the problem. My belief in this was strengthened when an oscilloscope showed that the power supplies yielding excess thermal energy output produced only doubly rectified DC power, while the power supply that had given reasonable results provided DC current that had been further "smoothed out."

I would welcome an explanation from any listener of why the DC power supplies furnishing doubly rectified DC would give meter readings leading me to the appearance of excess thermal energy output, but I decided that these power supplies presented a complication I didn't want to deal with, and I scurried off to buy me some immersion coils.
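One possible account, offered here as a guess rather than as the author's answer: average-reading meters understate the power delivered by unsmoothed full-wave-rectified DC, because the time-average of v·i exceeds the product of the separate averages. For a rectified sine into a resistive load the factor works out to π²/8 ≈ 1.23, which a short numerical check confirms:

```python
# If the meters are average-reading, then for unsmoothed full-wave-rectified
# DC the product of the meter readings, V_avg * I_avg, understates the true
# average power into a resistive load -- making the computed VIt too small
# and the thermal output look "excessive". (A guess, not the source's answer.)
import math

N = 100_000
dt = math.pi / N
# Full-wave rectified sine, peak 1 V, into a 1-ohm load: v(t) = |sin t|
v_avg = sum(abs(math.sin(i * dt)) for i in range(N)) * dt / math.pi
p_true = sum(math.sin(i * dt) ** 2 for i in range(N)) * dt / math.pi
ratio = p_true / v_avg**2  # true power vs. meter-product power
print(f"true/apparent power ratio = {ratio:.3f}")  # ~ pi^2/8 = 1.234
```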

But, to keep a continuous chain of energy transformations, I needed to show the transformation from gravitational potential or kinetic energy to electric. The one activity that I came up with to measure all the necessary quantities for both was to energize a motor with D-cells to lift a known mass. I could measure the electrical energy used from the voltage, current, and time, and the gravitational potential energy gained from the mass and the distance through which it was lifted. But, alas, the largest percentage of the electrical energy I could convert to gravitational potential energy was 11%. It made me wonder how energy ever became considered to be a conserved quantity, anyway -- to the extent that we were willing to wait a quarter century between the hypothesis and discovery of a particle which would preserve its conservation!

This taught me something else, too -- that the Second Law of Thermodynamics is with us just as much as the First. Only when the energy transformation is to thermal energy can we be assured of 100% transformation efficiency -- and even then what we are left with is a measurement of specific heat. The electrical equivalent of heat experiment really leads us to a measurement of the specific heat of water, and the alternative I had to resort to to complete the chain connecting mechanical, electrical, and thermal energy -- the conventional experiment of measuring temperature increase in metal shot after hundreds of inversions in a container (made possible in smaller containers by temperature probes measuring to the hundredths of a degree) -- ends up with measurements of specific heats of metals. The conservation of energy among its many forms outside the mechanical realm seems to rest upon the fact that all of our experiments transforming energy to thermal form have led to a self-consistent set of measured values for specific heats.

It is the Second Law of Thermodynamics, too, that makes energy an important concept in society as well as in physics. After all, if we had only the First Law to worry about, we wouldn't have to worry: energy might not be created, but it isn't destroyed either. All the energy in the world today would continue to be available to us.

But for energy to meet our needs, it must be transformed -- e.g., we need to increase the thermal energy in our homes in winter, and we need a lot of energy brought to our appliances by electrons in electric current if they are to operate. The Second Law of Thermodynamics tells us that when energy is transformed, some of it gets transformed to a form that is less useful (the most typical example of this is "waste heat"). Energy "sources" are more useful forms of energy that can be transformed to meet our needs. When we "produce" energy, what we are really doing is to transform useful energy from these energy "sources" to a form that meets our needs. When we "use" these energy "sources," energy in a form that met our needs is transformed to a less useful form. When we "conserve" energy, we "use" the smallest amount of an energy "source" to accomplish a particular task.

An important plan for any energy future is to "conserve" as much as we can, but "conserve" as much as it might, an industrial society still needs to "use" new "sources" of energy – to heat and cool its buildings, to run its appliances, to move its people, and to manufacture its goods. Because of their convenience, the "sources" of choice for more than a hundred years have been fossil fuels, the fuels I ask my students to plan their future without.

Why? Not just because a shortage of fossil fuels got us into trouble in 1973 – and again in 1979. Not just because burning fossil fuels produces carbon dioxide which leads to global warming. More fundamentally, we're eventually going to run out of them. Their continued use to support an ever-increasing population is not "sustainable" -- in the sense that our use of them denies future generations the benefits of their use (and as a manufacturing material as well as an energy "source").

Twenty years after the 1973 Arab Oil Embargo I took a retrospective look at what our actions showed we had learned from it. I learned that US total energy "use" had declined in the years immediately following the energy crises of 1973 and 1979, and that US energy use through 1990 had fallen below a host of predictions, but that most of the reduction was due to the industrial sector. But little had been done to wean us from our diet of fossil fuels.

The Solar Energy Research Institute was charged at its founding in 1977 to meet 20% of US energy needs from renewable sources by 2000. It was renamed the National Renewable Energy Laboratory (NREL) in 1991. I thought that this 30-year anniversary of the Arab Oil Embargo might be a good time to find out whether this goal had been met.

Data for US fossil fuel and total energy use are plotted in Figures 8 and 10. Both graphs show a decline following the energy crisis years of 1973 and 1979, and both show that fossil fuel and total energy use had climbed back to their peak 1979 values a decade later and continue to climb. But, while fossil fuel use doubled from 1949 to 1968, it has not increased even 50% beyond the 1968 usage since then. And not until 2000 did petroleum use climb back to its 1979 peak.

But the fact that we have put the brakes on increasing our petroleum use more than for other fossil fuels since the energy crises of the 1970s is no overt cause for rejoicing. For while imports still comprise only a small fraction of the coal (1.5%) and natural gas (20%) that we use, the fraction of petroleum imported passed 50% in 1990. M. King Hubbert, whose ability to forecast future fossil fuel production in terms of past data was legendary, wrote in the September 1971 Scientific American [2] that "In the case of oil the period of peak production appears to be the present," and he was right.

We've decreased the rate at which our use of energy in general and fossil fuels in particular has increased, but these uses are still increasing. Moreover, the years since the energy crises of the 1970s have seen a decline in US production of petroleum and continually increasing imports.

How're we doing on renewables? Did NREL achieve the goal of 20% of US energy from renewable sources by 2000? Fig. 9 plots energy from conventional hydroelectricity, biomass, geothermal, and solar, and only since 1988 has solar gotten up off the t-axis on the graph. Most of our renewable energy continues to come from the two sources that played the leading role even before renewable energy was fashionable: hydroelectricity and biomass. Geothermal has also started to make a more significant contribution since the energy crisis years, although it, too, had been around for a long time, as we learned at last summer's meeting. The total US energy use in Fig. 10 shows an increasing gap between total energy use and fossil fuel use. Although no new nuclear reactors have been erected since Three Mile Island in 1979, nuclear electricity continues to play an increasing role, and its contribution has grown to be just a little greater than that of renewables.

In 1979 the Ford Foundation-sponsored study, Energy: The Next Twenty Years, opened with the following statement:

More than half a decade has passed since the oil crisis of 1973-1974 signaled a new era in U.S. and world history. The effort to develop a satisfactory policy response to what was once characterized as the "moral equivalent of war" has stretched out so long that weariness rather than vigor characterizes the national debate. . . . energy and environmental objectives seem irreconcilable; . . . a national consensus that solar energy is a good thing has yet to result in significant resource commitments, while support for nuclear energy, yesterday's hope for tomorrow, is eroding; and coal is marking time. Meanwhile, the slow, steady increase in the number of barrels of oil imported . . . provide[s] reminders that much needs to be done. [3]

I don't think it would stretch the imagination to replace "more than half a decade" in this statement with "three decades." In that time we have not learned the lessons of the energy crises, nor have we met the well-intentioned goal of 20% of our energy from renewable sources by 2000. In fact, at the World Summit on Sustainable Development in Johannesburg last year the leaders of the world could not agree to increase the percentage of the world's energy use from renewables to 15% by 2010. Last fall when I presented my ninth graders the challenge of the new Active Physics-formatted chapter I wrote on energy issues, I told them that I was asking them to do what the leaders of the world were unwilling to commit to: plan their energy future without fossil fuels.

In the year 2010 those ninth graders will be graduating from college and beginning to take their place in the world. If the leaders of the world, more preoccupied with the politics of the present than with framing a forward-looking vision of the future, haven't figured out how to produce 15% of the world's energy by renewable means by then, I hope that the next generation will be better trained to deal with this problem.


1. John L. Roeder, "Active Physics Chapters on Energy," AAPT Announcer, 32(2), 95 (Summer 2002)

2. M. King Hubbert, "The Energy Resources of the Earth," in Energy and Power (Freeman, San Francisco, 1971)

3. Hans H. Landsberg, et al., Energy: The Next Twenty Years (Ballinger, Cambridge, MA, 1979)

spherometer formula

Calling this distance h, and knowing the distance between the legs, the radius R is given by the formula:

click here about the spherometer formula
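The formula itself is only linked, not reproduced, in the post. The standard textbook relation, for spherometer legs forming an equilateral triangle of side s and a central-screw reading (sagitta) h, is R = s²/(6h) + h/2; the numbers below are illustrative:

```python
def spherometer_radius(s, h):
    """Radius of curvature from spherometer readings (standard textbook form).

    s: distance between adjacent legs (equilateral triangle), same units as h
    h: central-screw height (sagitta) above the plane of the legs
    R = s^2 / (6 h) + h / 2
    """
    return s**2 / (6 * h) + h / 2

# Example: legs 4.0 cm apart, screw reads h = 0.05 cm
R = spherometer_radius(4.0, 0.05)
print(f"R = {R:.1f} cm")  # R = 53.4 cm
```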
Wednesday, December 15, 2010

Bacteria harmful to humans

1. Treponema pallidum, the cause of syphilis

Treponema pallidum is a spirochaete bacterium, spiral in shape. Four subspecies have been identified: Treponema pallidum pallidum, Treponema pallidum pertenue, Treponema pallidum carateum, and Treponema pallidum endemicum. This post discusses Treponema pallidum pallidum, the cause of syphilis. Treponema pallidum pallidum is a motile spirochaete that generally infects through direct sexual contact, entering the host's body through gaps between epithelial cells. The organism can also be transmitted to a fetus via the transplacental route during the later stages of pregnancy. Its helical structure allows Treponema pallidum pallidum to move with a characteristic corkscrew motion through viscous media such as mucus. In this way the organism can reach the host's blood and lymphatic circulation through tissues and mucous membranes. On July 17, 1998, a journal reported the genome sequence of Treponema pallidum. Treponema pallidum pallidum has one of the smallest bacterial genomes, at 1.14 million base pairs (Mb), has limited metabolic capabilities, and is able to adapt to a wide range of mammalian tissues.

2. Mycobacterium leprae, the cause of leprosy

Mycobacterium leprae, also called Hansen's bacillus, is the bacterium that causes leprosy (Hansen's disease). It is an intracellular bacterium. M. leprae is a Gram-positive, rod-shaped bacterium, similar to Mycobacterium tuberculosis in size and shape.

3. Salmonella typhosa, the cause of typhoid fever

In medicine, typhoid is also known as typhus abdominalis. Typhus abdominalis is an inflammatory disease of the intestine caused by bacterial infection. Typhoid is one form of salmonellosis, the class of diseases caused by Salmonella infection. The bacteria that cause typhoid enter the body through food and drink contaminated with Salmonella typhosa. They enter through the mouth, pass through the stomach to the small intestine, where they multiply and are then released into the bloodstream, producing a high fever.

Typhus abdominalis spreads very quickly: through contact with a person suffering from typhoid; through poor hygiene of food and drink; through milk and milk containers whose poor cleanliness makes them breeding grounds for Salmonella; and through inadequate waste disposal and unsanitary conditions, which are the biggest factors in the spread of the disease.
Sunday, December 12, 2010

viscosity of liquids

A fluid is a substance that deforms continuously when subjected to shear, that is, it responds to even the smallest shear stress. At rest, or in equilibrium, a fluid cannot sustain a shear force acting on it, and so a fluid changes shape easily without any separation of mass.

Viscosity, or the thickness of a liquid, is one of the properties of a liquid that determines the magnitude of its resistance to shear forces. Viscosity arises mainly from interactions between the liquid's molecules.

All real fluids (gases and liquids) have characteristic properties that can be measured, among them: density, viscosity, compressibility, surface tension, and capillarity. Some fluid properties are in fact combinations of other fluid properties; kinematic viscosity, for example, involves both dynamic viscosity and density. As far as we know, a fluid is a collection of molecules whose separations are large in a gas and small in a liquid. The molecules are not bound to a lattice, but move freely with respect to one another.

Density is a measure of the concentration of a liquid's mass, expressed as mass per unit volume. The density of water (ρwater) at 4 °C and atmospheric pressure (patm) is 1000 kg/m³.

Specific weight is the weight of a body per unit volume at a given temperature and pressure; the weight of a body is the product of its density (ρ) and the acceleration of gravity.

Relative density (s) is the ratio of the density of a substance to the density of water (ρwater), or the ratio of the specific weight of a substance to the specific weight of water. Because the influence of temperature and pressure on the density of a liquid is very small, it can be neglected, and the density of a liquid can be taken as constant.
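These three definitions translate directly into code; the 850 kg/m³ liquid below is illustrative:

```python
# Density bookkeeping sketch: density, specific weight, and relative density,
# using the 1000 kg/m^3 figure for water quoted above.
RHO_WATER = 1000.0  # kg/m^3 at 4 C and atmospheric pressure
G = 9.81            # m/s^2

def specific_weight(rho):
    return rho * G                  # gamma = rho * g, N/m^3

def relative_density(rho):
    return rho / RHO_WATER          # s = rho / rho_water (dimensionless)

rho_oil = 850.0  # an illustrative liquid
print(f"{specific_weight(rho_oil):.1f} N/m^3")   # 8338.5 N/m^3
print(f"{relative_density(rho_oil):.2f}")        # 0.85
```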

Viscosity is the property of a liquid of resisting shear stress (τ) while moving or flowing. It is caused by cohesion between the particles of the liquid, which produces shear stress between the moving molecules. An ideal liquid has no viscosity. The viscosity of a liquid can be divided into two kinds: dynamic or absolute viscosity (μ) and kinematic viscosity (ν).

A Newtonian liquid is one whose shear stress (τ) is proportional to the velocity gradient normal to the direction of flow. The velocity gradient is the ratio of the change in velocity to the change in distance along the flow.

Real fluids possess a certain amount of internal friction, called viscosity. Viscosity exists in both liquids and gases and is essentially a frictional force between adjacent layers of fluid as those layers slide past one another. Because of viscosity, the fluid layers do not all have the same speed: the layer closest to the pipe wall does not move at all (v = 0), while the layer at the center of the flow has the greatest speed. In liquids, viscosity is caused by cohesive forces between the molecules.
In a fluid, the force required (F) turns out to be proportional to the area of fluid in contact with each plate (A) and to the speed (v), and inversely proportional to the distance between the plates (l). The force F required to move a layer of fluid at constant speed v, for a plate of cross-sectional area A, is

F = η A v / l

where the viscosity η is defined as the ratio of the shear stress (F/A) to the rate of shear strain (v/l).

In other words:
The larger the area of the plate in contact with the fluid, the larger the force F required, so the force is proportional to the contact area (F ∝ A). For a given contact area A, a larger speed v requires a larger force F, so the force is proportional to the speed (F ∝ v).
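The proportionalities above combine into F = η A v / l, which can be sketched directly; the property values below are illustrative:

```python
# Sketch of the plate-drag relation F = eta * A * v / l, with illustrative
# numbers for water between two plates.
def viscous_force(eta, area, speed, gap):
    """Force to drag a plate of the given area at constant speed over a
    fluid layer of thickness `gap`: F = eta * A * v / l."""
    return eta * area * speed / gap

eta_water = 1.0e-3   # Pa*s at ~20 C
F = viscous_force(eta_water, area=0.02, speed=0.5, gap=1.0e-3)
print(f"F = {F:.3f} N")  # F = 0.010 N
```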

Stokes' Law
Viscosity in the flow of a viscous fluid plays the same role as friction in the motion of a solid body. For an ideal fluid the viscosity η = 0, so we always assume that a body moving through an ideal fluid experiences no fluid friction. If, however, the body moves with some speed through a viscous fluid, its motion will be opposed by the fluid's frictional force. The magnitude of that force has been formulated as

F = η A v / l = (A / l) η v = k η v

The coefficient k depends on the geometric shape of the body. For a body whose shape is a sphere of radius r, laboratory measurements show that
k = 6 π r
F = 6 π η r v
This equation is what is known to this day as Stokes' law.

Using Stokes' law, the (terminal) speed of the sphere can then be found from the equation:

v = 2 r^2 g (ρ – ρ0) / (9 η)
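A quick check of the terminal-speed formula, using illustrative property values for a small steel ball falling through glycerin:

```python
# Terminal-speed sketch from Stokes' law, v = 2 r^2 g (rho - rho0) / (9 eta),
# for a small steel ball falling in glycerin (illustrative property values).
G = 9.8  # m/s^2

def terminal_speed(r, rho_ball, rho_fluid, eta):
    return 2 * r**2 * G * (rho_ball - rho_fluid) / (9 * eta)

v = terminal_speed(r=1.0e-3, rho_ball=7800.0, rho_fluid=1260.0, eta=1.5)
print(f"v = {v:.4f} m/s")  # v = 0.0095 m/s
```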
Tuesday, December 7, 2010

Starch (Amylum)

Starch, or amylum, is a carbohydrate consisting of a large number of glucose units joined together by glycosidic bonds. This polysaccharide is produced by all green plants as a store of energy. It is the most important carbohydrate in the human diet and is contained in staple foods such as potatoes, wheat, maize (corn), rice, and cassava.

Pure starch is a white, tasteless, odorless powder that is insoluble in cold water or alcohol. It consists of two types of molecules: the linear and helical amylose and the branched amylopectin. Depending on the plant, starch generally contains 20 to 25% amylose and 75 to 80% amylopectin. Glycogen, the glucose store of animals, is a more highly branched version of amylopectin.

Starch is processed to produce many of the sugars used in processed foods. When dissolved in warm water, it can be used as a thickening, stiffening, or gluing agent, yielding wheatpaste.
Thursday, December 2, 2010

Mendel's Laws of Inheritance

From the Indonesian-language Wikipedia, the free encyclopedia

Dominant and recessive alleles/genes in the parents (1, P), offspring (2, F1), and grandchildren (3, F2) according to Mendel

Mendel's laws of inheritance are the laws governing the inheritance of traits in organisms, set out by Gregor Johann Mendel in his work 'Experiments on Plant Hybridization'. The laws consist of two parts:

  1. Mendel's law of segregation, also known as Mendel's First Law, and
  2. Mendel's law of independent assortment, also known as Mendel's Second Law.

The law of segregation (Mendel's first law)

The ratio of B (brown), b (white), S (short tail), and s (long tail) in the F2 generation

The law of segregation states that during the formation of gametes (sex cells), the two parental genes that make up an allele pair separate, so that each gamete receives one gene from its parent.

In outline, this law covers three points:

  1. Genes have alternative forms that govern variation in inherited characters. This is the concept of the two kinds of allele: the recessive allele (not always visible externally, written with a lowercase letter, e.g. w in the adjacent figure), and the dominant allele (visible externally, written with an uppercase letter, e.g. R).
  2. Each individual carries a pair of genes, one from the male parent (e.g. ww in the adjacent figure) and one from the female parent (e.g. RR in the adjacent figure).
  3. If this gene pair consists of two different alleles (Sb and sB in figure 2), the dominant allele (S or B) will always be expressed (visible externally). The recessive allele (s or b), while not always expressed, will still be passed on to the gametes formed by the offspring.

The law of independent assortment (Mendel's second law)

Mendel's second law states that when two individuals differ in two or more pairs of traits, each pair of traits is inherited independently, without depending on the other pairs. In other words, alleles of genes for different traits do not influence one another. This means that the gene determining, e.g., plant height and the gene determining flower color in a plant do not affect each other.

As shown in figure 1, the male parent (level 1) has genotype ww (phenotypically white), and the female parent has genotype RR (phenotypically red). The first generation (level 2 in the figure) is the cross of the male and female parental genotypes, forming 4 new individuals (all of genotype wR). Crossing/mating this first generation in turn forms the individuals of the next generation (level 3 in the figure), with gametes R and w on the left side (the level-2 male parent) and gametes R and w on the top row (the level-2 female parent). The combinations of these gametes form 4 possible individuals, as shown in the Punnett square at level 3, with genotypes RR, Rw, Rw, and ww. So at level 3 the ratio of the genotypes RR (red), Rw (also red), and ww (white) is 1:2:1. Phenotypically, the ratio of red to white individuals is 3:1.

While the example in figure 1 is a combination of parents with one dominant trait (a color), the second example shows parents with two dominant traits: tail shape and coat color. A cross of parents differing in one dominant trait is called a monohybrid cross; a cross of parents differing in two dominant traits is known as a dihybrid cross; and so on.

In figure 2, the dominant traits are tail shape (short, genotype SS; long, genotype ss) and coat color (white, genotype bb; brown, genotype BB). The gametes formed by the male parent are Sb and Sb, while the gametes of the female parent are sB and sB (shown by the letters beneath the boxes). The combinations of these gametes form 4 individuals at the F1 level, all of genotype SsBb. If these F1 offspring are then crossed with one another, they form the individuals of the F2 generation. The F1 gametes appear on the left side and top row of the Punnett square. The individuals formed at the F2 level have 16 possible combinations, with 2 tail shapes: short (genotype SS or Ss) and long (genotype ss); and 2 coat colors: brown (genotype BB or Bb) and white (genotype bb). The resulting ratio of brown to white is 12:4, and the ratio of short to long tails is 12:4. The detailed ratio of the genotypes SSBB:SSBb:SsBB:SsBb:SSbb:Ssbb:ssBB:ssBb:ssbb is 1:2:2:4:1:2:1:2:1.
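The 16-square dihybrid cross described above can be enumerated directly; the sketch below reproduces the 12:4 ratios for both traits:

```python
# Punnett-square sketch for the SsBb x SsBb dihybrid cross described above.
from itertools import product
from collections import Counter

def gametes(genotype):
    # A genotype like "SsBb" contributes one allele from each gene pair.
    pairs = [genotype[i:i + 2] for i in range(0, len(genotype), 2)]
    return ["".join(p) for p in product(*pairs)]

def cross(parent1, parent2):
    offspring = Counter()
    for g1, g2 in product(gametes(parent1), gametes(parent2)):
        # Sort each gene pair so "sS" and "Ss" count as the same genotype
        geno = "".join("".join(sorted(pair)) for pair in zip(g1, g2))
        offspring[geno] += 1
    return offspring

counts = cross("SsBb", "SsBb")
brown = sum(n for g, n in counts.items() if "B" in g)   # dominant B present
short = sum(n for g, n in counts.items() if "S" in g)   # dominant S present
print(f"brown:white = {brown}:{16 - brown}")  # brown:white = 12:4
print(f"short:long = {short}:{16 - short}")   # short:long = 12:4
```

The genotype counts in `counts` also reproduce the detailed 1:2:2:4:1:2:1:2:1 ratio quoted above.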

A third example, with one dominant factor for color: white and red

For more biology information, click biologi