Search for Physics Experiments and Tutorials


Monday, February 1, 2010

Physics in Everyday Life

IV-Romans learned about Optics, Thermodynamics, Mechanics, Electricity and Magnetism, Nuclear Physics, and Electronics. What are these? Which is most relevant to your life? Did you enjoy the physics class? What is your physics topic?

73 comments:

  1. Topic : * BUOYANCY


    Buoyancy is the ability of an object to float in a liquid, such as water. This concept helps to explain why some things float while other objects sink. Buoyancy is an important factor in the design of many objects and in a number of water-based activities, such as boating or scuba diving.


The mathematician Archimedes discovered much of how buoyancy works more than 2,200 years ago. In his research, Archimedes found that an object is buoyed up by a force equal to the weight of the water displaced by the object. In other words, an inflatable boat that displaces 100 pounds (45 kilograms) of water is buoyed up by 100 pounds of supporting force. An object that floats in the water is known as being positively buoyant. An object that sinks to the bottom is negatively buoyant, while an object that hovers at the same level in the water is neutrally buoyant.

This same idea helps to determine what will float in water and what will sink. If an object weighs more than the weight of the water it displaces, it will sink. If the object weighs less, it will float. This helps explain why a heavy ship can easily float in the water, while a much smaller and lighter brick will sink quickly. It isn't the size or shape of an object that primarily determines buoyancy, but the relation between the object's weight and the weight of the water the object displaces.


    Any object, wholly or partially immersed in a fluid, is buoyed up by a force equal to the weight of the fluid displaced by the object.
    – Archimedes of Syracuse

    Buoyancy is important in a surprising number of fields. Designers and engineers must design boats, ships and seaplanes in a way that ensures that they remain afloat. In the case of submarines, experts developed ways to make them sink and bring them back to the surface. Many objects were developed with buoyancy in mind, such as life preservers and pontoons. Buoyancy affects many more things than most people imagine.


Additionally, buoyancy is very important in a number of water-related sports. Many swimmers know that there are easy ways to float at the surface, such as lying on one's back or holding a full breath. Buoyancy becomes noticeable when a swimmer tries to dive to the bottom of the pool, which can take effort. Scuba divers work with many buoyancy issues, as divers must know how to float, hover and sink in the water. In fact, scuba divers often wear extra lead weights to counteract the positive buoyancy of their bodies and gear.




    example :

    A block of mass M=1.424kg is floating stationary in a beaker of water. A string connected to the bottom of the beaker holds the block in place. The tension in the string is 7.70 N. The bottom of the block is floating a distance h=1.63cm above the bottom of the beaker.

    1. What is the magnitude and direction of the buoyant force acting on the block?

    2. What is the density of the block?


    T = tension in string = 7.7 N
    Fg = force of gravity on block
    Fb = buoyant force on block

    T + Fg = Fb and T = Fb - Fg

    M = mass of the block = 1.424 kg
    p = density of the block
    V = volume of the block = M/p

    Fb = weight of fluid displaced = V*(density of water)*g = V*1000*g
    Fg = M*g = V*p*g

    T = V*1000*g - M*g = 1000*M*g/p - M*g
    T = (M*g)(1000/p - 1)
    T/(M*g) = 1000/p - 1
    p = 1000/[T/(M*g) + 1]
    p = 644.66 kg/m^3

    So the density of the block is 644.66 kg/m^3

Fg = M*g = 13.97 N
Fb = T + Fg = 7.7 + 13.97 = 21.67 N, directed upward
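The same numbers can be checked with a short Python sketch (a minimal illustration; the variable names are made up):

g = 9.81            # m/s^2, acceleration due to gravity
rho_water = 1000.0  # kg/m^3, density of water
M = 1.424           # kg, mass of the block
T = 7.70            # N, tension in the string holding the block down

Fg = M * g                             # weight of the block, ~13.97 N
Fb = T + Fg                            # buoyant force balances weight plus tension
rho_block = rho_water / (T / Fg + 1)   # rearranged from T = (M*g)(1000/p - 1)

print(round(Fb, 2), "N upward")        # 21.67 N
print(round(rho_block, 2), "kg/m^3")   # ~644.66 kg/m^3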

    ReplyDelete
  2. *BUOYANCY

    STABILITY

A floating object is stable if it tends to restore itself to an equilibrium position after a small displacement. For example, floating objects generally have vertical stability: if the object is pushed down slightly, it displaces more fluid, creating a greater buoyant force which, unbalanced against the weight force, pushes the object back up.
Rotational stability is of great importance to floating vessels. Given a small angular displacement, the vessel may return to its original position (stable), move away from its original position (unstable), or remain where it is (neutral).
Rotational stability depends on the relative lines of action of forces on an object. The upward buoyant force on an object acts through the centre of buoyancy, the centroid of the displaced volume of fluid. The weight force on the object acts through its centre of gravity. An object will be stable if an angular displacement moves the lines of action of these forces so as to set up a 'righting moment'.



    DENSITY

For example, a pound coin floats in mercury because of the buoyant force upon it.
    If the weight of an object is less than the weight of the displaced fluid when fully submerged, then the object has an average density that is less than the fluid and has a buoyancy that is greater than its own weight. If the fluid has a surface, such as water in a lake or the sea, the object will float at a level where it displaces the same weight of fluid as the weight of the object. If the object is immersed in the fluid, such as a submerged submarine or air in a balloon, it will tend to rise. If the object has exactly the same density as the fluid, then its buoyancy equals its weight. It will remain submerged in the fluid, but it will neither sink nor float. An object with a higher average density than the fluid has less buoyancy than weight and it will sink. A ship will float even though it may be made of steel (which is much denser than water), because it encloses a volume of air (which is much less dense than water), and the resulting shape has an average density less than that of the water.
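A minimal Python sketch of this float-or-sink rule (the densities are illustrative values):

def buoyancy_state(rho_object, rho_fluid):
    # Compare the object's average density with the fluid's density
    if rho_object < rho_fluid:
        return "floats (positively buoyant)"
    if rho_object > rho_fluid:
        return "sinks (negatively buoyant)"
    return "hovers (neutrally buoyant)"

print(buoyancy_state(7850, 1000))  # a solid steel block in water: sinks
print(buoyancy_state(300, 1000))   # a steel hull enclosing air: floats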

    ReplyDelete
Surface tension is a phenomenon in which the surface of a liquid, where the liquid is in contact with a gas, acts like a thin elastic sheet. This term is typically used only when the liquid surface is in contact with a gas (such as the air). If the surface is between two liquids (such as water and oil), it is called "interfacial tension."

    Surface tension (denoted with the Greek variable gamma) is defined as the ratio of the surface force F to the length d along which the force acts:

    gamma = F / d
    Units of Surface Tension
    Surface tension is measured in SI units of N/m (newton per meter), although the more common unit is the cgs unit dyn/cm (dyne per centimeter).

In order to consider the thermodynamics of the situation, it is sometimes useful to consider it in terms of work per unit area. The SI unit in that case is the J/m² (joule per square meter). The cgs unit is erg/cm².
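As a minimal Python sketch of the defining ratio gamma = F/d (the force and length are made-up values chosen to land near water's surface tension):

F = 0.0036   # N, surface force measured along a wire (illustrative)
d = 0.05     # m, length along which the force acts
gamma = F / d
print(gamma, "N/m")   # 0.072 N/m, close to water at room temperature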

These cohesive forces bind the surface particles together. Though this binding is weak - it's pretty easy to break the surface of a liquid, after all - it manifests in many ways.

    Examples of Surface Tension:
    *Drops of water
    *Insects walking on water
    * Needle (or paper clip) floating on water

    --Maricor San Juan

    ReplyDelete
  4. * Acceleration *


    Acceleration is defined as the rate of change of velocity. Acceleration is inherently a vector quantity, and an object will have non-zero acceleration if its speed and/or direction is changing.


    The operation of subtracting the initial from the final velocity must be done by vector addition since they are inherently vectors.


The units for acceleration can be implied from the definition to be meters/second divided by seconds, usually written m/s².


    The instantaneous acceleration at any time may be obtained by taking the limit of the average acceleration as the time interval approaches zero.


    The average acceleration is the ratio between the change in velocity and the time interval.


For example, if a car accelerates from rest to 5 m/s in 5 seconds, its average acceleration is

a = (5 m/s) / (5 s) = 1 m/s²
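The same arithmetic in Python (a trivial sketch of the definition a = (v - v0)/t):

v0 = 0.0   # m/s, the car starts from rest
v = 5.0    # m/s, final speed
t = 5.0    # s, time interval
a = (v - v0) / t
print(a, "m/s^2")   # 1.0 m/s^2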


    An instantaneous acceleration is the change in velocity at one moment.


    Calculating acceleration involves dividing velocity by time — or in terms of units, dividing meters per second [m/s] by second [s]. Dividing distance by time twice is the same as dividing distance by the square of time. Thus the SI unit of acceleration is the meter per second squared.


    Another frequently used unit is the acceleration due to gravity — g. Since we are all familiar with the effects of gravity on ourselves and the objects around us it makes for a convenient standard for comparing accelerations. Everything feels normal at 1 g, twice as heavy at 2 g, and weightless at 0 g. This unit has a very precise definition (g = 9.80665 m/s2) but for everyday use 9.8 m/s2 is sufficient.

    The unit called acceleration due to gravity (represented by a roman g) is not the same as the natural phenomena called acceleration due to gravity (represented by an italic g). The former has a defined value whereas the latter has to be measured.

    Although the term "g force" is often used, the g is a measure of acceleration, not force. (More on this later.) Of particular concern to humans are the physiological effects of acceleration.






    / > Ana Marie Alvarado ..

    ReplyDelete
  5. ** V O L T A G E **

Voltage, also called electromotive force, is a quantitative expression of the potential difference in charge between two points in an electrical field. The greater the voltage, the greater the flow of electrical current (that is, the quantity of charge carriers that pass a fixed point per unit of time) through a conducting or semiconducting medium for a given resistance to the flow. Voltage is symbolized by an uppercase italic letter V or E. The standard unit is the volt, symbolized by a non-italic uppercase letter V. One volt will drive one coulomb (6.24 x 10^18) of charge carriers, such as electrons, through a resistance of one ohm in one second.
    Voltage can be direct or alternating. A direct voltage maintains the same polarity at all times. In an alternating voltage, the polarity reverses direction periodically. The number of complete cycles per second is the frequency, which is measured in hertz (one cycle per second), kilohertz, megahertz, gigahertz, or terahertz. An example of direct voltage is the potential difference between the terminals of an electrochemical cell. Alternating voltage exists between the terminals of a common utility outlet.

    A voltage produces an electrostatic field, even if no charge carriers move (that is, no current flows). As the voltage increases between two points separated by a specific distance, the electrostatic field becomes more intense. As the separation increases between two points having a given voltage with respect to each other, the electrostatic flux density diminishes in the region between them.


    Voltage is electric potential energy per unit charge, measured in joules per coulomb ( = volts). It is often referred to as "electric potential", which then must be distinguished from electric potential energy by noting that the "potential" is a "per-unit-charge" quantity. Like mechanical potential energy, the zero of potential can be chosen at any point, so the difference in voltage is the quantity which is physically meaningful. The difference in voltage measured when moving from point A to point B is equal to the work which would have to be done, per unit charge, against the electric field to move the charge from A to B.
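A minimal sketch of this "energy per unit charge" definition in Python (the charge and voltage are illustrative numbers):

q = 2.0        # C, charge moved from A to B
V_AB = 12.0    # V, potential difference between A and B
W = q * V_AB   # J, work done, per the definition V = W/q
print(W, "J")  # 24.0 J to move 2 C across 12 V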


    Voltage is commonly used as a short name for electrical potential difference. Its corresponding SI unit is the volt (symbol: V, not italicized). Electric potential is a hypothetically measurable physical dimension, and is denoted by the algebraic variable V (italicized).



The voltage between two (electron) positions "A" and "B", inside a solid electrical conductor (or inside two electrically connected, solid electrical conductors), is denoted by (V_A − V_B). This voltage is the electrical driving force that drives a conventional electric current in the direction A to B. Voltage can be directly measured by a voltmeter. Well-constructed, correctly used, real voltmeters approximate very well to ideal voltmeters. An analogy involving the flow of water is sometimes helpful in understanding the concept of voltage.



    ** J A I Z E L :X

    ReplyDelete
  6. COLLISION


    Collisions involve forces (there is a change in velocity). Collisions can be elastic, meaning they conserve energy and momentum, inelastic, meaning they conserve momentum but not energy, or totally inelastic (or plastic), meaning they conserve momentum and the two objects stick together.

    The magnitude of the velocity difference at impact is called the closing speed.

    The field of dynamics is concerned with moving and colliding objects.


    A perfectly elastic collision is defined as one in which there is no loss of kinetic energy in the collision. An inelastic collision is one in which part of the kinetic energy is changed to some other form of energy in the collision. Any macroscopic collision between objects will convert some of the kinetic energy into internal energy and other forms of energy, so no large scale impacts are perfectly elastic. Momentum is conserved in inelastic collisions, but one cannot track the kinetic energy through the collision since some of it is converted to other forms of energy. Collisions in ideal gases approach perfectly elastic collisions, as do scattering interactions of sub-atomic particles which are deflected by the electromagnetic force. Some large-scale interactions like the slingshot type gravitational interactions between satellites and planets are perfectly elastic. Collisions between hard spheres may be nearly elastic, so it is useful to calculate the limiting case of an elastic collision. The assumption of conservation of momentum as well as the conservation of kinetic energy makes possible the calculation of the final velocities in two-body collisions.
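The final velocities mentioned in the last sentence can be computed directly; here is a one-dimensional Python sketch (the masses and velocities are illustrative):

def elastic_collision(m1, v1, m2, v2):
    # Final velocities from conservation of momentum and kinetic energy
    v1f = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2f = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1f, v2f

v1f, v2f = elastic_collision(2.0, 3.0, 1.0, 0.0)
print(v1f, v2f)   # 1.0 and 4.0 m/s; total p (6 kg*m/s) and KE (9 J) unchanged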


Let the linear, angular and internal momenta of a molecule be given by the set of r variables { p_i }. The state of a molecule may then be described by the range δw_i = δp_1 δp_2 δp_3 ... δp_r. There are many such ranges corresponding to different states; a specific state may be denoted by the index i. Two molecules undergoing a collision can thus be denoted by (i, j). (Such an ordered pair is sometimes known as a constellation.)





    *KATH'Z*

    ReplyDelete
  7. TOPIC TOPIC :[ELECTRIC CURRENT]: TOPIC TOPIC
    ______________________________________________

-->In electric circuits, the energy carriers are free electrons or electrons loosely bound to the atomic nuclei. The coulomb (C) is the basic unit of electric charge. To produce 1 coulomb of electric charge, 6.3 x 10^18 electrons are required.

-->Electric current is measured by determining the amount of charge (q) passing through a perpendicular cross-section of the conductor per unit time (t). In equation form, this gives

I = q/t
-->The unit of electric current is the ampere (A). It is equivalent to 1 coulomb of charge passing through a cross-section of the conductor per second. In other words, if 6.3 x 10^18 electrons (1 coulomb) pass a cross-section of the conductor in 1 second, the electric current is 1 ampere. The units are named after the French scientists Charles Coulomb and Andre-Marie Ampere.

    1 ampere = 1 coulomb/second
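In Python, the defining equation I = q/t looks like this (using the text's rounded figure of 6.3 x 10^18 electrons per coulomb):

e = 1.6e-19      # C, approximate charge of one electron
q = 6.3e18 * e   # C, roughly one coulomb
t = 1.0          # s
I = q / t
print(round(I, 2), "A")   # ~1 A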

-->Although electrons move from the repelling negative terminal toward the attracting positive terminal of a source, it has been agreed that the direction of conventional current is from the positive to the negative terminal.

-->Electric current may be AC or DC. Direct current or DC is made up of electrons flowing in one direction. A battery produces direct current in a circuit because the terminals of the battery always keep the same polarity.

-->Generators in power plants produce alternating current (AC). This is the kind of current in the household circuit. In alternating current, there is a continuous back-and-forth movement of electrons in the circuit. Nearly all commercial AC circuits involve currents that alternate back and forth at a frequency of 60 cycles per second.

    ****AMMETER****
===>An ammeter measures current. The positive terminal of an ammeter is connected to the positive terminal of the energy source. Its negative terminal is connected to the negative terminal of the energy source.

    ------>[ MELVIN NAPOLES ]<------

    ReplyDelete
  8. This comment has been removed by the author.

    ReplyDelete
  9. Law Of Motion:

    Newton's laws of motion are three physical laws that form the basis for classical mechanics. They have been expressed in several different ways over nearly three centuries, and can be summarised as follows:

    1.In the absence of a net force, a body either is at rest or moves in a straight line with constant speed.

    2.A body experiencing a force F experiences an acceleration a related to F by F = ma, where m is the mass of the body. Alternatively, force is equal to the time derivative of momentum.

    3.Whenever a first body exerts a force F on a second body, the second body exerts a force −F on the first body. F and −F are equal in magnitude and opposite in direction.

    These laws describe the relationship between the forces acting on a body and the motion of that body. They were first compiled by Sir Isaac Newton in his work Philosophiae Naturalis Principia Mathematica, first published on July 5, 1687. Newton used them to explain and investigate the motion of many physical objects and systems. For example, in the third volume of the text, Newton showed that these laws of motion, combined with his law of universal gravitation, explained Kepler's laws of planetary motion.

    Newton's laws of motion are often stated in layman’s terms for simplification and easy recollection as:

    FIRST LAW: "An object in motion will stay in motion and an object at rest will stay at rest unless acted upon by an external force" or "A body persists in a state of uniform motion or of rest unless acted upon by an external force."

    SECOND LAW: "Force equals mass times acceleration" or "F = ma."

    THIRD LAW: "To every action there is an equal and opposite reaction."
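A trivial numeric sketch of the second and third laws (the mass and acceleration are illustrative):

m = 10.0   # kg, mass of the body
a = 2.5    # m/s^2, its acceleration
F = m * a  # second law: F = ma
print(F, "N")    # 25.0 N acting on the body
print(-F, "N")   # third law: the equal and opposite reaction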


    ---> clarkk :)

    ReplyDelete
  10. Topic: Free falling bodies
In mechanics, free fall is the state of a body that moves freely in any manner in the presence of gravity. The planets, for example, are in free fall in the gravitational field of the Sun. Newton's laws show that a body in free fall follows an orbit such that the sum of the gravitational and inertial forces equals zero. This explains why an astronaut in a spacecraft orbiting the Earth experiences a condition of weightlessness.

    ReplyDelete
  11. EXAMPLE FREE FALLING BODIES:
A 10 kg block being held at rest above the ground is released. The block begins to fall under the effect of gravity only. At the instant that the block is 2.0 meters above the ground, the speed of the block is 2.5 m/s. The block was initially released at a height of how many meters?

    SOLUTION:

v0 = 0 (initial velocity is 0)
y = 2.0 m (height above the ground)
v = 2.5 m/s (velocity at 2.0 meters above ground)
m = 10 kg
g = 9.8 m/s² (acceleration due to gravity)

    ReplyDelete
  12. Method One: Conservation of Energy
    This motion exhibits conservation of energy, so you can approach the problem that way. To do this, we'll have to be familiar with three other variables:

* U = mgy (gravitational potential energy)
* K = 0.5mv² (kinetic energy)
* E = K + U (total classical energy)

    We can then apply this information to get the total energy when the block is released and the total energy at the 2.0 meter above-the-ground point. Since the initial velocity is 0, there is no kinetic energy there, as the equation shows

    E0 = K0 + U0 = 0 + mgy0 = mgy0

E = K + U = 0.5mv² + mgy

    by setting them equal to each other, we get:

mgy0 = 0.5mv² + mgy

    and by isolating y0 (i.e. dividing everything by mg) we get:

y0 = 0.5v² / g + y

    Notice that the equation we get for y0 doesn't include mass at all. It doesn't matter if the block of wood weighs 10 kg or 1,000,000 kg, we will get the same answer to this problem.

    Now we take the last equation and just plug our values in for the variables to get the solution:

y0 = 0.5 * (2.5 m/s)² / (9.8 m/s²) + 2.0 m = 2.3 m

    This is an approximate solution, since we are only using two significant figures in this problem.
    Method Two: One-Dimensional Kinematics
    Looking over the variables we know and the kinematics equation for a one-dimensional situation, one thing to notice is that we have no knowledge of the time involved in the drop. So we have to have an equation without time. Fortunately, we have one (although I'll replace the x with y since we're dealing with vertical motion and a with g since our acceleration is gravity):

v² = v0² + 2g(y - y0)

    First, we know that v0 = 0. Second, we have to keep in mind our coordinate system (unlike the energy example). In this case, up is positive, so g is in the negative direction.

v² = 2g(y - y0)
v² / 2g = y - y0
y0 = -0.5v² / g + y

    Notice that this is exactly the same equation that we ended up with in the conservation of energy method. It looks different because one term is negative, but since g is now negative, those negatives will cancel and yield the exact same answer: 2.3 m.
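Both methods can be checked in a few lines of Python (numbers from the problem above):

g = 9.8          # m/s^2, magnitude of gravitational acceleration
v, y = 2.5, 2.0  # m/s and m, from the problem statement

y0_energy = 0.5 * v**2 / g + y   # method one: conservation of energy
a = -g                           # method two: up is positive, so the acceleration is -g
y0_kinematics = -0.5 * v**2 / a + y
print(round(y0_energy, 1), round(y0_kinematics, 1))   # 2.3 2.3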

    ReplyDelete
  13. ╔══════════╗
    Telecommunication
    ╚══════════╝

    Telecommunication is the transmission of signals over a distance for the purpose of communication. In earlier times, this may have involved the use of smoke signals, drums, semaphore, flags or heliograph. In modern times, telecommunication typically involves the use of electronic devices such as telephones, television, radio or computers. Early inventors in the field of telecommunication include Alexander Graham Bell, Guglielmo Marconi and John Logie Baird. Telecommunication is an important part of the world economy and the telecommunication industry's revenue was estimated to be $1.2 trillion in 2006.


    History

    •Early telecommunications
[Figure: A replica of one of Chappe's semaphore towers in Nalbach]
    In the Middle Ages, chains of beacons were commonly used on hilltops as a means of relaying a signal. Beacon chains suffered the drawback that they could only pass a single bit of information, so the meaning of the message such as "the enemy has been sighted" had to be agreed upon in advance. One notable instance of their use was during the Spanish Armada, when a beacon chain relayed a signal from Plymouth to London signalling the arrival of Spanish ships.
    In 1792, Claude Chappe, a French engineer, built the first fixed visual telegraphy system (or semaphore line) between Lille and Paris. However semaphore suffered from the need for skilled operators and expensive towers at intervals of ten to thirty kilometres (six to nineteen miles). As a result of competition from the electrical telegraph, the last commercial line was abandoned in 1880.

    ReplyDelete
  14. •Telegraph and telephone
    The first commercial electrical telegraph was constructed by Sir Charles Wheatstone and Sir William Fothergill Cooke and opened on 9 April 1839. Both Wheatstone and Cooke viewed their device as "an improvement to the [existing] electromagnetic telegraph" not as a new device.
    Samuel Morse independently developed a version of the electrical telegraph that he unsuccessfully demonstrated on 2 September 1837. His code was an important advance over Wheatstone's signaling method. The first transatlantic telegraph cable was successfully completed on 27 July 1866, allowing transatlantic telecommunication for the first time.
The conventional telephone was invented independently by Alexander Bell and Elisha Gray in 1876. Antonio Meucci invented the first device that allowed the electrical transmission of voice over a line in 1849. However, Meucci's device was of little practical value because it relied upon the electrophonic effect and thus required users to place the receiver in their mouth to "hear" what was being said. The first commercial telephone services were set up in 1878 and 1879 on both sides of the Atlantic in the cities of New Haven and London.

    •Radio and television
In 1832, James Lindsay gave a classroom demonstration of wireless telegraphy to his students. By 1854, he was able to demonstrate a transmission across the Firth of Tay from Dundee, Scotland to Woodhaven, a distance of two miles (3 km), using water as the transmission medium. In December 1901, Guglielmo Marconi established wireless communication between St. John's, Newfoundland (Canada) and Poldhu, Cornwall (England), earning him the 1909 Nobel Prize in Physics (which he shared with Karl Braun). However, small-scale radio communication had already been demonstrated in 1893 by Nikola Tesla in a presentation to the National Electric Light Association.
    On 25 March 1925, John Logie Baird was able to demonstrate the transmission of moving pictures at the London department store Selfridges. Baird's device relied upon the Nipkow disk and thus became known as the mechanical television. It formed the basis of experimental broadcasts done by the British Broadcasting Corporation beginning 30 September 1929. However, for most of the twentieth century televisions depended upon the cathode ray tube invented by Karl Braun. The first version of such a television to show promise was produced by Philo Farnsworth and demonstrated to his family on 7 September 1927.

    ReplyDelete
  15. This comment has been removed by the author.

    ReplyDelete
  16. ++++magnetism+++

    The ancient Greeks, originally those near the city of Magnesia, and also the early Chinese knew about strange and rare stones (possibly chunks of iron ore struck by lightning) with the power to attract iron. A steel needle stroked with such a "lodestone" became "magnetic" as well, and around 1000 the Chinese found that such a needle, when freely suspended, pointed north-south.

    The magnetic compass soon spread to Europe. Columbus used it when he crossed the Atlantic ocean, noting not only that the needle deviated slightly from exact north (as indicated by the stars) but also that the deviation changed during the voyage. Around 1600 William Gilbert, physician to Queen Elizabeth I of England, proposed an explanation: the Earth itself was a giant magnet, with its magnetic poles some distance away from its geographic ones (i.e. near the points defining the axis around which the Earth turns).

    The Magnetosphere
On Earth one needs a sensitive needle to detect magnetic forces, and out in space they are usually much, much weaker. But beyond the dense atmosphere, such forces have a much bigger role, and a region exists around the Earth where they dominate the environment: the Earth's magnetosphere. That region contains a mix of electrically charged particles, and electric and magnetic phenomena rather than gravity determine its structure.

Only a few of the phenomena observed on the ground come from the magnetosphere: fluctuations of the magnetic field known as magnetic storms and substorms, and the polar aurora or "northern lights," appearing in the night skies of places like Alaska and Norway. Satellites in space, however, sense much more: radiation belts, magnetic structures, fast streaming particles and the processes which energize them.

    ReplyDelete
  17. •Computer networks and the Internet
    On 11 September 1940, George Stibitz was able to transmit problems using teletype to his Complex Number Calculator in New York and receive the computed results back at Dartmouth College in New Hampshire. This configuration of a centralized computer or mainframe with remote dumb terminals remained popular throughout the 1950s. However, it was not until the 1960s that researchers started to investigate packet switching — a technology that would allow chunks of data to be sent to different computers without first passing through a centralized mainframe. A four-node network emerged on 5 December 1969; this network would become ARPANET, which by 1981 would consist of 213 nodes.
    ARPANET's development centred around the Request for Comment process and on 7 April 1969, RFC 1 was published. This process is important because ARPANET would eventually merge with other networks to form the Internet and many of the protocols the Internet relies upon today were specified through the Request for Comment process. In September 1981, RFC 791 introduced the Internet Protocol v4 (IPv4) and RFC 793 introduced the Transmission Control Protocol (TCP) — thus creating the TCP/IP protocol that much of the Internet relies upon today.
    However, not all important developments were made through the Request for Comment process. Two popular link protocols for local area networks (LANs) also appeared in the 1970s. A patent for the token ring protocol was filed by Olof Soderblom on 29 October 1974 and a paper on the Ethernet protocol was published by Robert Metcalfe and David Boggs in the July 1976 issue of Communications of the ACM.



    --jude:)

    ReplyDelete
  18. But what is magnetism?

Until 1821, only one kind of magnetism was known, the one produced by iron magnets. Then a Danish scientist, Hans Christian Oersted, while demonstrating to friends the flow of an electric current in a wire, noticed that the current caused a nearby compass needle to move. The new phenomenon was studied in France by Andre-Marie Ampere, who concluded that the nature of magnetism was quite different from what everyone had believed. It was basically a force between electric currents: two parallel currents in the same direction attract, while currents in opposite directions repel. Iron magnets are a very special case, which Ampere was also able to explain.
    In nature, magnetic fields are produced in the rarefied gas of space, in the glowing heat of sunspots and in the molten core of the Earth. Such magnetism must be produced by electric currents, but finding how those currents are produced remains a major challenge.
    Magnetic Field Lines
    Michael Faraday, credited with fundamental discoveries on electricity and magnetism (an electric unit is named "Farad" in his honor), also proposed a widely used method for visualizing magnetic fields. Imagine a compass needle freely suspended in three dimensions, near a magnet or an electrical current. We can trace in space (in our imagination, at least!) the lines one obtains when one "follows the direction of the compass needle." Faraday called them lines of force, but the term field lines is now in common use.

    Electromagnetic Waves
    Faraday not only viewed the space around a magnet as filled with field lines, but also developed an intuitive (and perhaps mystical) notion that such space was itself modified, even if it was a complete vacuum. His younger contemporary, the great Scottish physicist James Clerk Maxwell, placed this notion on a firm mathematical footing, including in it electrical forces as well as magnetic ones. Such a modified space is now known as an electromagnetic field.

    Today electromagnetic fields (and other types of field as well) are a cornerstone of physics. Their basic equations, derived by Maxwell, suggested that they could undergo wave motion, spreading with the speed of light, and Maxwell correctly guessed that this actually was light and that light was in fact an electromagnetic wave.

    Heinrich Hertz in Germany, soon afterwards, produced such waves by electrical means, in the first laboratory demonstration of radio waves. Nowadays a wide variety of such waves is known, from radio (very long waves, relatively low frequency) to microwaves, infra-red, visible light, ultra-violet, x-rays and gamma rays (very short waves, extremely high frequency).

    Radio waves produced in our magnetosphere are often modified by their environment and tell us about the particles trapped there. Other such waves have been detected from the magnetospheres of distant planets, the Sun and the distant universe. X-rays, too, are observed to come from such sources and are the signatures of high-energy electrons there.

    ReplyDelete
Thermodynamics is the field of physics that deals with the relationship between heat and other properties of a substance, such as pressure, density, and temperature.
    Specifically, thermodynamics focuses largely on how a heat transfer is related to various energy changes within a physical system undergoing a thermodynamic process.
    Such processes usually result in work being done by the system and are guided by the laws of thermodynamics.
    Thermodynamic Processes:
    A system undergoes a thermodynamic process when there is some sort of energetic change within the system, generally associated with changes in pressure, volume, internal energy (i.e. temperature), or any sort of heat transfer.

    There are several specific types of thermodynamic processes that have special properties:

    * Adiabatic process - a process with no heat transfer into or out of the system.
    * Isochoric process - a process with no change in volume, in which case the system does no work.
    * Isobaric process - a process with no change in pressure.
    * Isothermal process - a process with no change in temperature.

    States of Matter:
    The 5 states of matter

    * gas
    * liquid
    * solid
    * plasma
    * superfluid (such as a Bose-Einstein Condensate)

    Phase Transitions

    * condensation - gas to liquid
    * freezing - liquid to solid
    * melting - solid to liquid
    * sublimation - solid to gas
    * vaporization - liquid or solid to gas

    Heat Capacity:
    The heat capacity, C, of an object is the ratio of change in heat (energy change - denoted by delta-Q) to change in temperature (delta-T).

    C = delta-Q / delta-T

The heat capacity of a substance indicates the ease with which it heats up. A substance with a low heat capacity needs only a small amount of energy for a large temperature change, while a substance with a large heat capacity needs much more energy transfer for the same temperature change.
    Ideal Gas Equations:
There are various ideal gas equations which relate temperature (T1), pressure (P1), and volume (V1). The values after a thermodynamic change are indicated by (T2), (P2), and (V2). For a given amount of a substance, n (measured in moles), the following relationships hold:

    Boyle's Law (T is constant):
    P1V1 = P2V2

    Charles/Gay-Lussac Law (P is constant):
    V1/T1 = V2/T2

    Ideal Gas Law:
    P1V1/T1 = P2V2/T2 = nR

    R is the ideal gas constant, R = 8.3145 J/mol*K. For a given amount of matter, therefore, nR is constant, which gives the Ideal Gas Law.
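A quick Python sketch of these relations (the state values are illustrative):

P1, V1, T1 = 101325.0, 0.010, 300.0  # Pa, m^3, K: initial state
V2, T2 = 0.005, 300.0                # compress to half the volume, same T
P2 = P1 * V1 * T2 / (T1 * V2)
print(P2)                            # 202650.0 Pa: halving V doubles P (Boyle)

R = 8.3145                           # J/(mol*K), ideal gas constant
n = P1 * V1 / (R * T1)               # moles, from PV = nRT
print(round(n, 2))                   # ~0.41 mol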
    Laws of Thermodynamics:

* Zeroth Law of Thermodynamics - Two systems each in thermal equilibrium with a third system are in thermal equilibrium with each other.
    * First Law of Thermodynamics - The change in the energy of a system is the amount of energy added to the system minus the energy spent doing work.
    * Second Law of Thermodynamics - It is impossible for a process to have as its sole result the transfer of heat from a cooler body to a hotter one.
    * Third Law of Thermodynamics - It is impossible to reduce any system to absolute zero in a finite series of operations. This means that a perfectly efficient heat engine cannot be created.

    The Second Law & Entropy:
    The Second Law of Thermodynamics can be restated to talk about entropy, which is a quantitative measurement of the disorder in a system. The change in heat divided by the absolute temperature is the entropy change of the process. Defined this way, the Second Law can be restated as:

    In any closed system, the entropy of the system will either remain constant or increase.

    By "closed system" it means that every part of the process is included when calculating the entropy of the system.

    ReplyDelete
Topic: Energy

    •In physics, energy (from the Greek ἐνέργεια - energeia, "activity, operation", from ἐνεργός - energos, "active, working") is a scalar physical quantity that describes the amount of work that can be performed by a force, an attribute of objects and systems that is subject to a conservation law. Different forms of energy include kinetic, potential, thermal, gravitational, sound, light, elastic, and electromagnetic energy. The forms of energy are often named after a related force.
    •Any form of energy can be transformed into another form, e.g., from potential to thermal, and dissipated into the atmosphere. When oil, which contains high-energy bonds, is burned, the useful potential energy in the oil is converted into thermal energy, which can no longer be used to perform work (e.g., power a machine) and is lost. Although the thermal energy may not be a useful form of energy, the total energy has remained the same. The total energy always remains the same whenever energy changes from one form to another, even if the energy loses its ability to be used for performing work. This principle, the conservation of energy, was first postulated in the early 19th century, and applies to any isolated system. According to Noether's theorem, the conservation of energy is a consequence of the fact that the laws of physics do not change over time.

    -Sources of Energy-

1. Solar Energy
2. Geothermal Energy
3. Hydroelectric Energy
4. Wind Energy
5. Chemical Energy
6. Nuclear Energy

    2 kinds of energy

    1.Potential Energy

    •Potential energy is energy stored within a physical system as a result of the position or configuration of the different parts of that system. It has the potential to be converted into other forms of energy, such as kinetic energy, and to do work in the process. The SI unit of measure for energy (including potential energy) and work is the joule (symbol J).
    •The term "potential energy" was coined by the 19th century Scottish engineer and physicist William Rankine

    2.Kinetic Energy

•The kinetic energy of an object is the extra energy which it possesses due to its motion. It is defined as the work needed to accelerate a body of a given mass from rest to its current velocity. Having gained this energy during its acceleration, the body maintains this kinetic energy unless its speed changes. Negative work of the same magnitude would be required to return the body to a state of rest from that velocity.
    •The kinetic energy of a single object is completely frame-dependent (relative). For example, a bullet racing by a non-moving observer has kinetic energy in the reference frame of this observer, but the same bullet has zero kinetic energy in the reference frame which moves with the bullet. The kinetic energy of systems of objects, however, may sometimes not be completely removable by simple choice of reference frame.
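The bullet example can be made concrete with a short Python sketch (the mass and speeds are illustrative):

m = 0.01          # kg, bullet mass
v_ground = 400.0  # m/s, speed in the non-moving observer's frame
v_bullet = 0.0    # m/s, speed in the frame moving with the bullet

K_ground = 0.5 * m * v_ground**2   # K = 0.5*m*v^2 depends on the frame
K_bullet = 0.5 * m * v_bullet**2
print(K_ground, K_bullet)          # 800.0 J vs 0.0 J for the same bullet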

    ReplyDelete
Topic: Boyle's Law

•Boyle's law (sometimes referred to as the Boyle-Mariotte law) is one of several gas laws and a special case of the ideal gas law. Boyle's law describes the inversely proportional relationship between the absolute pressure and volume of a gas, if the temperature is kept constant within a closed system. The law was named after the chemist and physicist Robert Boyle, who published the original law in 1662. The law itself can be stated as follows:
    •For a fixed amount of an ideal gas kept at a fixed temperature, P [pressure] and V [volume] are inversely proportional (while one increases, the other decreases).
    Equation
The mathematical equation of Boyle's law:

    pV=k

    where:
    •p denotes the pressure of the system.
    •V is the volume of the gas.
    •k is a constant value representative of the pressure and volume of the system.

•So long as temperature remains constant, the same amount of energy given to the system persists throughout its operation and therefore, theoretically, the value of k will remain constant. However, due to the derivation of pressure as perpendicular applied force and the probabilistic likelihood of collisions with other particles through collision theory, the application of force to a surface may not be perfectly constant for such values of k, but will have a limit when averaging such values over a given time.
    •Forcing the volume V of the fixed quantity of gas to increase, keeping the gas at the initially measured temperature, the pressure p must decrease proportionally. Conversely, reducing the volume of the gas increases the pressure.
    •Boyle's law is used to predict the result of introducing a change, in volume and pressure only, to the initial state of a fixed quantity of gas. The before and after volumes and pressures of the fixed amount of gas, where the before and after temperatures are the same (heating or cooling will be required to meet this condition), are related by the equation:

• p1V1 = p2V2

    •Boyle's law, Charles's law, and Gay-Lussac's law form the combined gas law. The three gas laws in combination with Avogadro's law can be generalized by the ideal gas law.
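A tiny Python sketch of the inverse proportionality (the initial state is illustrative):

p1, V1 = 100000.0, 0.002   # Pa, m^3
k = p1 * V1                # pV = k for this fixed amount of gas at fixed T
for V in (0.002, 0.004, 0.008):
    print(V, "m^3 ->", k / V, "Pa")   # doubling the volume halves the pressure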

    ReplyDelete
  22. Topic: MOMENTUM

    P=mv

    -is the product of the mass and velocity of an object (p = mv).
    -Objects in motion are said to have a momentum. This momentum is a vector. It has a size and a direction. The size of the momentum is equal to the mass of the object multiplied by the size of the object's velocity. The direction of the momentum is the same as the direction of the object's velocity.
    -Momentum is a conserved quantity in physics. This means that if you have several objects in a system, perhaps interacting with each other, but not being influenced by forces from outside of the system, then the total momentum of the system does not change over time. However, the separate momenta of each object within the system may change. One object might change momentum, say losing some momentum, as another object changes momentum in an opposite manner, picking up the momentum that was lost by the first.
    - Momentum is a conserved quantity, meaning that the total momentum of any closed system (one not affected by external forces) cannot change. Although originally seen to be due to Newton's laws, this law is also true in special relativity, and with appropriate definitions a momentum conservation law holds in electrodynamics, quantum mechanics, quantum field theory, and general relativity.
-A change in momentum is called IMPULSE.
-J=Ft
-The same impulse can produce an increase in momentum, a gradual decrease in momentum over a long period of time (small force), or an abrupt decrease in momentum over a short period of time (large force).
    Example:
    Problem #1
Lolito, whose mass is 40 kg, is moving at 5 m/s. Zaldy's mass is 30 kg. How fast should he run in order to have the same momentum as Lolito?
SOLUTION:
m = 40 kg, v = 5 m/s
P = mv
P = 40 kg (5 m/s)
P = 200 kg·m/s
v = P/m
v = 200 kg·m/s divided by 30 kg
v = 6.67 m/s
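The same solution in Python (note the rearrangement v = P/m):

p_lolito = 40 * 5            # kg*m/s, Lolito's momentum
m_zaldy = 30                 # kg
v_zaldy = p_lolito / m_zaldy # v = P/m
print(round(v_zaldy, 2), "m/s")   # 6.67 m/s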

    ReplyDelete
  23. HONEY GRACE VITUG

    VOLTAGE

is commonly used as a short name for electrical potential difference. Its corresponding SI unit is the volt (V). Electrical potential is a hypothetically measurable physical dimension, and is denoted by the algebraic variable V.


The voltage between two (electron) positions A and B inside a solid electrical conductor (or inside two electrically connected, solid electrical conductors) is denoted by V_A − V_B. This voltage is the electrical driving force that drives a conventional electric current in the direction A to B. Voltage can be directly measured by a voltmeter.

Precise modern and historic definitions of voltage exist, but due to the development of the electron theory of metal conduction in the period 1897-1933, and to developments in theoretical surface science from about 1910 to about 1950, particularly the theory of the local work function, some older definitions are no longer regarded as strictly correct. This is because they neglect the existence of chemical effects and surface effects.


In conduction processes occurring in metals and other solids, electric current consists almost exclusively of the flow of electrons. The movement of electrons is controlled by differences in the so-called total local thermodynamic potential, often denoted by the symbol μ (mu). This parameter is often called the local Fermi level, or sometimes the local electrochemical potential of an electron or the total chemical potential of an electron.



The voltage difference between two points corresponds to the water-pressure difference between two points. If there is a water-pressure difference between two points (due to a pump, for instance), then water flowing from the first point to the second will be able to do work, such as driving a turbine.





Voltage is additive in the following sense: the voltage between A and C is the sum of the voltage between A and B and the voltage between B and C.




When talking about alternating current (AC), there is a difference between instantaneous voltage and average voltage. Instantaneous voltages can be added as for direct current (DC), but average voltages can be meaningfully added only when they apply to signals that all have the same frequency and phase.

    ReplyDelete
  24. Maricel Segovia

    Snell's law
Relationship between the path taken by a ray of light as it moves from one medium to another and the refractive indices of the two media. Discovered in 1621 by Willebrord Snell (1580-1626), the law went unpublished until its mention by Christiaan Huygens. If n1 and n2 represent the indices of refraction of two media, and θ1 and θ2 are the angles of incidence and refraction that a ray of light makes with the line perpendicular to the boundary (the normal), Snell's law states that n1/n2 = sin θ2/sin θ1. Because the ratio n1/n2 is a constant for any given wavelength of light, the ratio of the two sines is also a constant for any angle.

    1. Snell's Law
    Key Concepts
    Refraction is the bending of light that takes place at a boundary between two materials having different indices of refraction. Refraction is due to a change in the speed of light as it passes from one medium to another.
    The boundary is the region where one medium meets another medium
    At a boundary, an incident ray can undergo partial reflection or, in certain situations, total internal reflection.
    No bending of the incident ray occurs if it strikes the boundary along the normal.
    The incident ray is the ray approaching the boundary. It strikes the boundary at the point of incidence. The refracted ray is the ray leaving the boundary through the second medium.
    The reflected ray is the ray undergoing partial (or total) reflection at the boundary. The normal is a construction line drawn perpendicular to the boundary at the point of incidence.
    The angle of incidence (i) is the angle between the incident ray and the normal. The angle of reflection (r) is the angle between the normal and the reflected ray.
    The angle of refraction (R) is the angle between the normal and the refracted ray.

    Both Reflection and Refraction occur when the light is incident on a more refractive medium.
    Some texts use the symbol r for the angle of refraction. The use of the same symbol to represent both the angle of reflection and the angle of refraction can be very confusing and should be avoided.
    Laws of Refraction:
    1. The ratio of sines of the angles of incidence and refraction is a constant. (Snell's Law) (The ratio is constant for a particular wavelength and a particular set of materials.)
    2. The incident and refracted rays are on opposite sides of the normal at the point of incidence.
    3. The incident ray, the normal, and the refracted ray are coplanar.
Snell's Law: sin i / sin R = n, where n is a constant.
(The constant is the ratio of the speeds of light in the two media.)
General form: n1 sin θ1 = n2 sin θ2
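A small Python sketch of the general form, including the total-internal-reflection case (the indices and angles are illustrative):

import math

def refraction_angle(n1, n2, theta1_deg):
    # Snell's law: n1*sin(theta1) = n2*sin(theta2)
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1:
        return None   # total internal reflection: no refracted ray
    return math.degrees(math.asin(s))

print(refraction_angle(1.00, 1.33, 30.0))   # air -> water: ~22.1 degrees
print(refraction_angle(1.33, 1.00, 60.0))   # water -> air: None (TIR)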

    ReplyDelete
  25. Lady Jelyn Barros(",)

    Band Structure

    In solid-state physics, the electronic band structure (or simply band structure) of a solid describes ranges of energy that an electron is "forbidden" or "allowed" to have. It is due to the diffraction of the quantum mechanical electron waves in the periodic crystal lattice. The band structure of a material determines several characteristics, in particular its electronic and optical properties.
Why do bands occur in materials?
    The electrons of a single isolated atom occupy atomic orbitals, which form a discrete set of energy levels. If several atoms are brought together into a molecule, their atomic orbitals split, as in a coupled oscillation. This produces a number of molecular orbitals proportional to the number of atoms. When a large number of atoms are brought together to form a solid, the number of orbitals becomes exceedingly large, and the difference in energy between them becomes very small, so the levels may be considered to form continuous bands of energy rather than the discrete energy levels of the atoms in isolation. However, some intervals of energy contain no orbitals, no matter how many atoms are aggregated, forming band gaps.
    Within an energy band, energy levels are so numerous as to be a near continuum. First, the separation between energy levels in a solid is comparable with the energy that electrons constantly exchange with phonons (atomic vibrations). Second, it is comparable with the energy uncertainty due to the Heisenberg uncertainty principle, for reasonably long intervals of time. As a result, the separation between energy levels is of no consequence.
    The lowermost, almost fully occupied band in an insulator or semiconductor, is called the valence band by analogy with the valence electrons of individual atoms. The uppermost, almost unoccupied band is called the conduction band because only when electrons are excited to the conduction band can current flow in these materials. The difference between insulators and semiconductors is only that the forbidden band gap between the valence band and conduction band is larger in an insulator, so that fewer electrons are found there and the electrical conductivity is lower. Because one of the main mechanisms for electrons to be excited to the conduction band is due to thermal energy, the conductivity of semiconductors is strongly dependent on the temperature of the material.
    This band gap is one of the most useful aspects of the band structure, as it strongly influences the electrical and optical properties of the material. Electrons can transfer from one band to the other by means of carrier generation and recombination processes. The band gap and defect states created in the band gap by doping can be used to create semiconductor devices such as solar cells, diodes, transistors, laser diodes, and others.
    Band structure of crystals
Because electron momentum is described in reciprocal space (momentum space), the dispersion relation between the energy and momentum of electrons can best be described in reciprocal space. It turns out that for crystalline structures, the dispersion relation of the electrons is periodic, and that the Brillouin zone is the smallest repeating space within this periodic structure. For an infinitely large crystal, if the dispersion relation for an electron is defined throughout the Brillouin zone, then it is defined throughout the entire reciprocal space.

    ReplyDelete
  26. Topic: G – Force (gravitational Force)

    The g-force experienced by an object is its acceleration relative to free-fall. It is termed "force" because such proper accelerations cannot be produced by gravity, but instead must result from other types of forces which usually cause stresses and strains on objects which make these sorts of forces more notable. Because of these strains, the types of accelerations discussed in terms of "g-forces," and measured by accelerometers, may be destructive to objects and organisms.
The exclusion of gravitational acceleration from "g-force" accelerations applies to standard gravitational acceleration. The upward "1 g-force" which is "felt" by an object sitting on the Earth's surface is not due to gravity per se, but instead is caused by the stress of the mechanical force exerted in the upward direction by supporting materials (such as the ground) which must act to keep the object from going into free-fall. An object on the Earth's surface is accelerating relative to "free-fall," which is the path an object would follow when falling freely toward the Earth's center. Objects allowed to free-fall, even under the influence of gravity, feel no "g-force," as demonstrated by the "zero-g" conditions in spacecraft in Earth orbit (or within a hypothetical elevator allowed to free-fall toward the center of the Earth, in vacuum). The unit of measure used is the g, the acceleration due to gravity at the Earth's surface, and it can be written g, g, or G. The unit g is not an SI unit (SI uses "g" for the gram), and "G" could be confused with the standard symbol for the gravitational constant, but both are distinct from it. The SI unit of acceleration is m/s². However, objects experiencing g-forces are not necessarily changing velocity or position, so by convention the standard units of acceleration are not used to express g-force.

    ReplyDelete
• The g-force acting on a stationary object resting on the Earth's surface is 1 g (upwards) and results from the resisting reaction of the Earth's surface bearing upwards equal to an acceleration of 1 g, and is equal and opposite to standard gravity, defined as 9.80665 m/s², or equivalently 9.80665 newtons of force per kilogram of mass.
• The g-force acting on an object in any weightless environment such as free-fall in a vacuum is 0 g.
• The g-force acting on an object under acceleration can be much greater than 1 g; for example, a dragster can exert a horizontal g-force of 5.3 when accelerating.
• The g-force acting on an object under acceleration downwards can be negative, for example when cresting a sharp hill on a roller coaster.
    Measurement of g-force is typically achieved using an accelerometer. In certain cases g-forces may be measured using suitably calibrated scales.
The gravitational force between two bodies of masses m and M offset by a vector distance r is given by

F = GMm r / r³ (in vector form), with magnitude F = GMm / r²


where G is the gravitational constant. Like the electrostatic force, it is an inverse square law. Newton showed that the gravitational force on a body of mass m from a spherically symmetric body of total mass M whose center is located a distance r away is equivalent to the force on m from a point mass M located a distance r away. This nontrivial fact can be proved using calculus, and was one of the highlights of Newton's Principia Mathematica. Newton also showed that the gravitational force felt by a test particle at some point inside a spherically symmetric body (i.e., r < R, where r is the distance from the center of mass and R is the radius of the spherically symmetric body) is equivalent to that due to a point mass at the center of mass with mass M(r), where M(r) is the total mass contained inside radius r.
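The magnitude form of the law in Python, using Earth-surface values for illustration:

G = 6.674e-11   # N*m^2/kg^2, gravitational constant
M = 5.972e24    # kg, mass of the Earth
m = 70.0        # kg, mass of a person (illustrative)
r = 6.371e6     # m, radius of the Earth

F = G * M * m / r**2
print(round(F, 1), "N")   # ~687 N, i.e. about m times 9.8 m/s^2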

    Horizontal axis g-force

The human body is better at surviving g-forces that are perpendicular to the spine. In general, when the acceleration is forwards, so that the g-force pushes the body backwards (colloquially known as "eyeballs in"), a much higher tolerance is shown than when the acceleration is backwards, and the g-force is pushing the body forwards ("eyeballs out"), since blood vessels in the retina appear more sensitive in the latter direction.
    Early experiments showed that untrained humans were able to tolerate 17 g eyeballs-in (compared to 12 g eyeballs-out) for several minutes without loss of consciousness or apparent long-term harm. The record for peak experimental horizontal g-force tolerance is held by acceleration pioneer John Stapp, in a series of rocket sled deceleration experiments in which he survived forces up to 46.2 times the force of gravity for less than a second. Stapp suffered lifelong damage to his vision from this test.

    Gennyvel J.Gonzaga

    ReplyDelete
  28. JOHN ROBERT ESTRADA:

    Electromagnetism:

    Electromagnetism is the physics of the electromagnetic field, a field that exerts a force on charged particles and is reciprocally affected by the presence and motion of such particles.
    A changing magnetic field produces an electric field (this is the phenomenon of electromagnetic induction, the basis of operation for electrical generators, induction motors, and transformers). Similarly, a changing electric field generates a magnetic field.
    The magnetic field is produced by the motion of electric charges, i.e., electric current. The magnetic field causes the magnetic force associated with magnets.
    The theoretical implications of electromagnetism led to the development of special relativity by Albert Einstein in 1905; from this it was shown that magnetic fields and electric fields are convertible into one another through relative motion.

    The force that the electromagnetic field exerts on electrically charged particles, called the electromagnetic force, is one of the fundamental forces. The other fundamental forces are the strong nuclear force (which holds atomic nuclei together), the weak nuclear force, and the gravitational force. All other forces (e.g. friction) are ultimately derived from these fundamental forces.
    The electromagnetic force is the one responsible for practically all the phenomena encountered in daily life, with the exception of gravity. All the forces involved in interactions between atoms can be traced to the electromagnetic force acting on the electrically charged protons and electrons inside the atoms. This includes the forces we experience in "pushing" or "pulling" ordinary material objects, which come from the intermolecular forces between the individual molecules in our bodies and those in the objects. It also includes all forms of chemical phenomena, which arise from interactions between electron orbitals.

    -->ROBERT

    ReplyDelete
  29. ~PRESSURE~

    Pressure is the force on an object that is spread over a surface area. The equation for pressure is the force divided by the area where the force is applied. Although this measurement is straightforward when a solid is pushing on a solid, the case of a solid pushing on a liquid or gas requires that the fluid be confined in a container. The force can also be created by the weight of an object.

    -Pressure of solid on a solid-

    When you apply a force to a solid object, the pressure is defined as the force applied divided by the area of application. The equation for pressure is P = F/A
    where:
    • P is the pressure
    • F is the applied force
    • A is the surface area where the force is applied
    • F/A is F divided by A

    Example:

    If you push on an object with your hand with a force of 20 pounds, and the area of your hand is 10 square inches, then the pressure you are exerting is 20 / 10 = 2 pounds per square inch.

    Pressure equals Force divided by Area(P=F/A)

    We can see that for a given force, if the surface area is smaller, the pressure will be greater. If you use a larger area, you are spreading out the force, and the pressure (or force per unit area) becomes smaller.

    -Solid pressing on confined fluid-

    When a liquid or gas is confined in a container or cylinder, you can create a pressure by applying a force with a solid piston. The pressure created in the cylinder equals the force applied divided by the area of the piston: P = F/A.
    In a confined fluid (neglecting the effect of gravity on the fluid), the pressure is the same throughout the container, pressing equally on all the walls. In the case of a bicycle pump, the pressure created inside the pump is transmitted through the hose into the bicycle tire, while the air remains confined throughout.

    Pressure is in all directions in a fluid

    Increasing the force will increase the pressure inside the cylinder.

    -Caused by gravity-

    Since the weight of an object is a force caused by gravity, we can substitute weight in the pressure equation. Thus the pressure (P) caused by the weight (W) of an object is that weight divided by the area (A) where the weight is applied.
    P = W/A
    If you place a solid object on the floor, the pressure on the floor over the area of contact is the weight of the object divided by the area on the floor.
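
    To make the arithmetic concrete, here is a small Python sketch of both formulas; the hand-push numbers repeat the example above, and the box on the floor uses made-up values:

    def pressure(force, area):
        # P = F/A: pressure is force spread over an area
        return force / area

    # Hand example from above: a 20 pound push over 10 square inches
    print(pressure(20, 10))        # 2.0 pounds per square inch

    # Weight case, P = W/A, in SI units: a 50 kg box resting on a 0.25 m^2 base
    g = 9.81                       # m/s^2
    weight = 50 * g                # W = m*g, in newtons
    print(pressure(weight, 0.25))  # ~1962 Pa (newtons per square meter)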

    Instruments used in measuring pressure:

    * Barometer - an instrument that measures atmospheric pressure.
    * Manometer - an instrument that measures the pressure of a confined gas or liquid.
    * Sphygmomanometer - an instrument used in measuring blood pressure.


    :-"Jakki Mae L. Pineda:-"

    ReplyDelete
  30. Ariel L. Patiño said ...

    Pascal's Principle

    Pascal's principle states that a pressure applied to an enclosed fluid is transmitted everywhere in the fluid. Hence, if a pressure is applied to one side of an enclosed fluid, all the other walls containing the fluid feel the same pressure. The pressure is transmitted without being diminished.

    In physics the term fluid refers to either a liquid or a gas. If a pressure is applied to a compressible gas, Pascal's principle still applies, but the volume of the gas will change. For Pascal's principle to be useful to hydraulics, the fluid should be an incompressible liquid, which will transmit the applied pressure without changing its volume.

    To understand how Pascal's principle applies to hydraulics imagine an enclosed fluid as in the figure. The enclosure has two movable pistons. Now imagine that one of the pistons has a small cross sectional area and the other has a large cross sectional area. If a force pushes the smaller piston into the fluid, this force results in a pressure that is transmitted to the larger piston.

    The pressure on both the small and large pistons is the same according to Pascal's principle. Because the total force equals the pressure multiplied by the area the total force on the larger piston will be greater than the total force on the smaller piston. In a hydraulic device, the larger piston multiplies the force exerted on the smaller piston.

    For example, if the pistons are both circular, the smaller piston has a 1 inch radius, and the larger piston has a 10 inch radius, then the area of the larger piston is 100 times as large as that of the smaller piston. (The area of a circle equals pi times the radius squared.) Hence this larger piston multiplies any force applied to the smaller piston by 100 times.

    Thus a hydraulic jack with pistons of these dimensions can use a 20 pound force to lift a 2000 pound car. However, nothing is free: energy must be conserved. The 20 pound force will have to push the smaller piston 100 times the distance the mechanic wants to lift the 2000 pound car. Therefore hydraulic devices might, by design, multiply the force by smaller amounts to reduce the distance traveled by the smaller piston.
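
    Here is a minimal Python sketch of the force multiplication just described, using the same 1 inch and 10 inch piston radii; the helper function name is my own:

    import math

    # Pascal's principle: the pressure on both pistons is equal, so
    # F_large = F_small * (A_large / A_small), with A = pi * r^2.
    def output_force(f_small, r_small, r_large):
        a_small = math.pi * r_small**2
        a_large = math.pi * r_large**2
        return f_small * (a_large / a_small)

    # The jack from the example: 20 lb applied to the 1 inch piston
    print(output_force(20, 1, 10))  # 2000.0 lb, enough to lift the car

    # Energy is conserved: the small piston must travel (10/1)^2 = 100x as far
    print((10 / 1)**2)              # 100.0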

    Hydraulics is just one of many applications of the fundamental physical concept of pressure.

    ReplyDelete
  31. Refraction is the change in direction of a wave due to a change in its speed. This is most commonly observed when a wave passes from one medium to another at an angle. Refraction of light is the most commonly observed phenomenon, but any type of wave can refract when it interacts with a medium, for example when sound waves pass from one medium into another or when water waves move into water of a different depth. Refraction is described by Snell's law, which states that the angle of incidence θ1 is related to the angle of refraction θ2 by

    sin θ₁ / sin θ₂ = v₁ / v₂ = n₂ / n₁

    where v1 and v2 are the wave velocities in the respective media, and n1 and n2 the refractive indices. In general, the incident wave is partially refracted and partially reflected; the details of this behavior are described by the Fresnel equations.

    In optics, refraction occurs when light waves travel from a medium with a given refractive index to a medium with another at an angle. At the boundary between the media, the wave's phase velocity is altered, usually causing a change in direction. Its wavelength increases or decreases but its frequency remains constant. For example, a light ray will refract as it enters and leaves glass, assuming there is a change in refractive index. A ray traveling along the normal (perpendicular to the boundary) will change speed, but not direction; refraction still occurs in this case. Understanding of this concept led to the invention of lenses and the refracting telescope.

    Refraction can be seen when looking into a bowl of water. Air has a refractive index of about 1.0003, and water has a refractive index of about 1.33. If a person looks at a straight object, such as a pencil or straw, which is placed at a slant, partially in the water, the object appears to bend at the water's surface. This is due to the bending of light rays as they move from the water to the air. Once the rays reach the eye, the eye traces them back as straight lines (lines of sight). The lines of sight intersect at a higher position than where the actual rays originated. This causes the pencil to appear higher and the water to appear shallower than it really is. The depth that the water appears to be when viewed from above is known as the apparent depth. This is an important consideration for spearfishing from the surface, because it will make the target fish appear to be in a different place, and the fisher must aim lower to catch the fish.

    The same thing happens to water waves: ripples passing over a shallower region inclined at an angle to the wavefront travel more slowly in the shallower water, so the wavelength decreases and the wave bends at the boundary. This phenomenon explains why waves on a shoreline tend to strike the shore close to a perpendicular angle. As the waves travel from deep water into shallower water near the shore, they are refracted from their original direction of travel to an angle more normal to the shoreline.

    Refraction is also responsible for rainbows and for the splitting of white light into a rainbow-spectrum as it passes through a glass prism. Glass has a higher refractive index than air. When a beam of white light passes from air into a material having an index of refraction that varies with frequency, a phenomenon known as dispersion occurs, in which different coloured components of the white light are refracted at different angles, i.e., they bend by different amounts at the interface, so that they become separated. The different colors correspond to different frequencies.
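
    As a small worked example of Snell's law, here is a Python sketch using the air and water indices quoted above; the function name and the 45 degree angle are just illustrative choices:

    import math

    # Snell's law: n1*sin(theta1) = n2*sin(theta2)
    def refraction_angle(theta1_deg, n1, n2):
        # Angle of refraction in degrees, or None on total internal reflection
        s = n1 * math.sin(math.radians(theta1_deg)) / n2
        if abs(s) > 1:
            return None
        return math.degrees(math.asin(s))

    # Light entering water (n = 1.33) from air (n = 1.0003) at 45 degrees
    print(refraction_angle(45, 1.0003, 1.33))  # ~32.1 degrees, bent toward the normal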

    While refraction allows for beautiful phenomena such as rainbows, it may also produce peculiar optical phenomena, such as mirages and Fata Morgana. These are caused by the change of the refractive index of air with temperature.

    ReplyDelete
  32. sir;


    my work...




    by:weNdieEe RaymundoOo.

    ReplyDelete
  33. Friction is the force resisting the relative lateral (tangential) motion of solid surfaces, fluid layers, or material elements in contact. It is usually subdivided into several varieties:

    * Dry friction resists relative lateral motion of two solid surfaces in contact. Dry friction is also subdivided into static friction between non-moving surfaces, and kinetic friction (sometimes called sliding friction or dynamic friction) between moving surfaces.

    * Lubricated friction or fluid friction resists relative lateral motion of two solid surfaces separated by a layer of gas or liquid.

    * Fluid friction is also used to describe the friction between layers within a fluid that are moving relative to each other.

    * Skin friction is a component of drag, the force resisting the motion of a solid body through a fluid.

    * Internal friction is the force resisting motion between the elements making up a solid material while it undergoes deformation.

    Friction is not a fundamental force, as it is derived from electromagnetic force between charged particles, including electrons, protons, atoms, and molecules, and so cannot be calculated from first principles, but instead must be found empirically. When contacting surfaces move relative to each other, the friction between the two surfaces converts kinetic energy into thermal energy, or heat. Contrary to earlier explanations, kinetic friction is now understood not to be caused by surface roughness but by chemical bonding between the surfaces. Surface roughness and contact area, however, do affect kinetic friction for micro- and nano-scale objects where surface area forces dominate inertial forces.
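
    The empirical model usually taught is F = μN; here is a minimal Python sketch of static versus kinetic friction for a block on a flat surface, with made-up coefficients:

    def friction_force(applied, normal, mu_static, mu_kinetic):
        # Simplified dry-friction model: static friction balances the push
        # up to mu_s*N; once sliding starts, kinetic friction is mu_k*N.
        max_static = mu_static * normal
        if applied <= max_static:
            return applied              # block stays put
        return mu_kinetic * normal      # block slides

    # Illustrative numbers: a 10 kg block, mu_s = 0.6, mu_k = 0.4 (assumed values)
    N = 10 * 9.81                       # normal force in newtons
    print(friction_force(30, N, 0.6, 0.4))  # 30.0 N: the block does not move
    print(friction_force(80, N, 0.6, 0.4))  # ~39.2 N of kinetic friction while sliding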

    ReplyDelete
  34. Terminal Velocity

    In fluid dynamics an object is moving at its terminal velocity if its speed is constant due to the restraining force exerted by the air, water or other fluid through which it is moving.

    A free-falling object achieves its terminal velocity when the downward force of gravity equals the upward force of drag. This causes the net force on the object to be zero, resulting in an acceleration of zero.

    As the object accelerates (usually downwards due to gravity), the drag force acting on the object increases, causing the acceleration to decrease. At a particular speed, the drag force produced will equal the object's weight (mg). At this point the object ceases to accelerate altogether and continues falling at a constant speed called terminal velocity (also called settling velocity). Terminal velocity varies directly with the ratio of weight to drag. More drag means a lower terminal velocity, while increased weight means a higher terminal velocity. An object moving downward with greater than terminal velocity (for example because it was affected by a downward force or it fell from a thinner part of the atmosphere or it changed shape) will slow until it reaches terminal velocity.

    *Examples

    Based on wind resistance, for example, the terminal velocity of a skydiver in a free-fall position with a semi-closed parachute is about 195 km/h (120 mph or 55 m/s). This velocity is the asymptotic limiting value of the acceleration process, because the effective forces on the body balance each other more and more closely as the terminal velocity is approached. In this example, a speed of 50% of terminal velocity is reached after only about 3 seconds, while it takes 8 seconds to reach 90%, 15 seconds to reach 99% and so on. Higher speeds can be attained if the skydiver pulls in his or her limbs. In this case, the terminal velocity increases to about 320 km/h (200 mph or 90 m/s), which is also the terminal velocity of the peregrine falcon diving down on its prey. And the same terminal velocity is reached for a typical 150 grain bullet travelling in the downward vertical direction — when it is returning to earth having been fired upwards, or perhaps just dropped from a tower — according to a 1920 U.S. Army Ordnance study.


    An object falling toward the surface of the Earth will fall 9.81 meters per second faster every second (an acceleration of 9.81 m/s² or 32.18 ft/s²). The reason an object reaches a terminal velocity is that the drag force resisting motion is approximately proportional to the square of its speed. At low speeds, the drag is much less than the gravitational force and so the object accelerates. As it accelerates, the drag increases, until it equals the weight. Drag also depends on the projected area. This is why things with a large projected area, such as parachutes, have a lower terminal velocity than small objects such as bullets.

    Buoyancy effects, due to the upward force on the object by the surrounding fluid, can be taken into account using Archimedes' principle: the mass m has to be reduced by the displaced fluid mass ρV, with V the volume of the object.

    On Earth, the terminal velocity of an object changes due to the properties of the fluid, the mass of the object and its projected cross-sectional surface area.

    Air density increases with decreasing altitude, at about 1% per 80 meters (262 ft). For objects falling through the atmosphere, for every 160 meters (525 ft) of falling, the terminal velocity decreases 1%. After reaching its local terminal velocity, an object that continues to fall slows down to match the decreasing local terminal velocity.

    *Terminal velocity in the presence of buoyancy force

    When the buoyancy effects are taken into account, an object falling through a fluid under its own weight can reach a terminal velocity (settling velocity) if the net force acting on the object becomes zero. When the terminal velocity is reached the weight of the object is exactly balanced by the upward buoyancy force and drag force.
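
    Balancing weight (less buoyancy) against quadratic drag gives the usual formula v_t = sqrt(2*(m - rho*V)*g / (rho*A*Cd)). Here is a rough Python sketch; the skydiver numbers (mass, frontal area, drag coefficient) are assumed ballpark values, not measured data:

    import math

    def terminal_velocity(m, g, rho, area, cd, displaced_volume=0.0):
        # v_t = sqrt(2*(m - rho*V)*g / (rho * A * Cd)); the buoyancy term
        # rho*V is negligible for a skydiver but matters for objects in water.
        effective_weight = (m - rho * displaced_volume) * g
        return math.sqrt(2 * effective_weight / (rho * area * cd))

    # Assumed skydiver figures: 80 kg, ~0.7 m^2 belly-down frontal area,
    # drag coefficient ~1.0, sea-level air density 1.225 kg/m^3
    print(terminal_velocity(80, 9.81, 1.225, 0.7, 1.0))  # ~43 m/s, the order of the ~55 m/s quoted above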



    ..cheskacamilletiangha:)

    ReplyDelete
  35. TOPIC:
    "CIRCULAR MOTION"

    circular motion
    *rotation along a circle: a circular path or a circular orbit.
    *It can be uniform, that is, with constant angular rate of rotation, or non-uniform, that is, with a changing rate of rotation.
    *The rotation around a fixed axis of a three-dimensional body involves circular motion of its parts. We can talk about circular motion of an object if we ignore its size, so that we have the motion of a point mass in a plane. For example, the center of mass of a body can undergo circular motion.
    Examples of circular motion are: an artificial satellite orbiting the Earth in geosynchronous orbit, a stone which is tied to a rope and is being swung in circles (cf. hammer throw), a racecar turning through a curve in a race track, an electron moving perpendicular to a uniform magnetic field, a gear turning inside a mechanism.
    Circular motion is accelerated even if the angular rate of rotation is constant, because the object's velocity vector is constantly changing direction. Such change in direction of velocity involves acceleration of the moving object by a centripetal force, which pulls the moving object towards the center of the circular orbit. Without this acceleration, the object would move in a straight line, according to Newton's laws of motion.
    Circular motion always involves a change in the direction of the velocity vector, but it is also possible for the magnitude of the velocity to change at the same time. Circular motion is referred to as uniform if |v| is constant, and non-uniform if it is changing. Your speedometer tells you the magnitude of your car's velocity vector, so when you go around a curve while keeping your speedometer needle steady, you are executing uniform circular motion. If your speedometer reading is changing as you turn, your circular motion is non-uniform. Uniform circular motion is simpler to analyze mathematically, so we will attack it first and then pass to the non-uniform case.
    uniform circular motion — circular motion in which the
    magnitude of the velocity vector remains constant
    nonuniform circular motion — circular motion in which the
    magnitude of the velocity vector changes
    radial — parallel to the radius of a circle; the in-out direction
    tangential — tangent to the circle, perpendicular to the radial direction
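
    The centripetal acceleration mentioned above is a = v²/r, directed toward the center. A minimal Python sketch with illustrative numbers (a car on a curve):

    def centripetal_acceleration(speed, radius):
        # a = v^2 / r, pointing toward the center of the circle
        return speed**2 / radius

    def centripetal_force(mass, speed, radius):
        # F = m*v^2 / r, the inward force needed to hold the mass on the circle
        return mass * centripetal_acceleration(speed, radius)

    # Illustrative: a 1000 kg car rounding a 50 m curve at 20 m/s (72 km/h)
    print(centripetal_acceleration(20, 50))  # 8.0 m/s^2
    print(centripetal_force(1000, 20, 50))   # 8000.0 N, supplied by tire friction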

    _EHCHEN I CHOU_

    ReplyDelete
  36. Ohm's law states that the current through a conductor between two points is directly proportional to the potential difference or voltage across the two points, and inversely proportional to the resistance between them, provided that the temperature remains constant
    The mathematical equation that describes this relationship is

    I = V/R

    where V is the potential difference measured across the resistance in units of volts; I is the current through the resistance in units of amperes; and R is the resistance of the conductor in units of ohms. More specifically, Ohm's law states that the R in this relation is constant, independent of the current.
    The law was named after the German physicist Georg Ohm, who, in a treatise published in 1827, described measurements of applied voltage and current through simple electrical circuits containing various lengths of wire. He presented a slightly more complex equation than the one above (see History section below) to explain his experimental results. The above equation is the modern form of Ohm's law.

    lurie mae oraq
    In physics, the term Ohm's law is also used to refer to various generalizations of the law originally formulated by Ohm. The simplest example of this is

    J = σE

    where J is the current density at a given location in a resistive material, E is the electric field at that location, and σ is a material-dependent parameter called the conductivity. This reformulation of Ohm's law is due to Gustav Kirchhoff.
    Ohm's Law defines the relationships between (P) power, (E) voltage, (I) current, and (R) resistance. One ohm is the resistance value through which one volt will maintain a current of one ampere.

    ( I ) Current is what flows through a wire or conductor, like water flowing down a river. Electrons drift from negative to positive through a conductor, though conventional current is drawn from positive to negative. Current is measured in (A) amperes or amps.

    ( E ) Voltage is the difference in electrical potential between two points in a circuit. It's the push or pressure behind current flow through a circuit, and is measured in (V) volts.

    ( R ) Resistance determines how much current will flow through a component. Resistors are used to control voltage and current levels. A very high resistance allows a small amount of current to flow. A very low resistance allows a large amount of current to flow. Resistance is measured in ohms.

    ( P ) Power is the current multiplied by the voltage at a given point, and is measured in (W) watts.
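
    A short Python sketch tying V, I, R and P together; the 9 V battery and 450 ohm resistor are illustrative values:

    # Ohm's law, I = V/R, and the power relation P = V*I
    # (equivalently P = I^2 * R = V^2 / R)
    def current(voltage, resistance):
        return voltage / resistance

    def power(voltage, amps):
        return voltage * amps

    # Illustrative circuit: a 9 V battery across a 450 ohm resistor
    I = current(9.0, 450)    # 0.02 A, i.e. 20 mA
    print(I, power(9.0, I))  # 0.02 0.18 -> 0.18 W dissipated in the resistor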

    ReplyDelete
  37. "Variation" is a generic word in astronomy as in other fields: many observable astronomical quantities can show variations of some kind. But the term variation also has a specific meaning in astronomy, referring to the variation of the Moon. This is the name given to one of the principal perturbations in the motion of the Moon.
    The variation was discovered by Tycho Brahe, who first noticed, starting from a lunar eclipse in December 1590, that at the times of syzygy (new or full moon), the apparent velocity of motion of the Moon (along its orbit as seen against the background of stars) was faster than expected. On the other hand, at the times of first and last quarter, its velocity was correspondingly slower than expected. (Those expectations were based on the lunar tables widely used up to Tycho's time. They took some account of the two largest irregularities in the Moon's motion, i.e. those now known as the equation of the center and the evection, see also Lunar theory - History.)

    (Figure caption: Variational orbit, nearly an ellipse with the Earth at the center, illustrating the perturbing effect of the Sun on the Moon's orbit under simplifying approximations, e.g. that in the absence of the Sun the Moon's orbit would be circular with the Earth at its center.)
    In 1687 Newton published, in the 'Principia', his first steps in the gravitational analysis of the motion of three mutually-attracting bodies. This included a proof that the Variation is one of the results of the perturbation of the motion of the Moon caused by the action of the Sun, and that one of the effects is to distort the Moon's orbit in a practically elliptical manner (ignoring at this point the eccentricity of the Moon's orbit), with the centre of the ellipse occupied by the Earth, and the major axis perpendicular to a line drawn between the Earth and Sun.
    Newton expressed an approximate recognition that the real orbit of the Moon is not exactly an eccentric Keplerian ellipse, nor exactly a central ellipse due to the variation, but "an oval of another kind." Newton did not give an explicit expression for the form of this "oval of another kind"; to an approximation, it combines the two effects of the central-elliptical variational orbit and the Keplerian eccentric ellipse. Their combination also continually changes its shape as the annual argument changes, and also as the evection shows itself in libratory changes in the eccentricity, and in the direction, of the long axis of the eccentric ellipse.
    The Variation is the second-largest solar perturbation of the Moon's orbit after the Evection, and the third-largest inequality in the motion of the Moon altogether; (the first and largest of the lunar inequalities is the equation of the centre, a result of the eccentricity – which is not an effect of solar perturbation).

    ReplyDelete
  38. Ultraviolet Rays
    Ultraviolet radiation (also known as UV radiation or ultraviolet rays) is a form of energy traveling through space. Some of the most frequently recognized types of energy are heat and light. These, along with others, can be classified as a phenomenon known as electromagnetic radiation. Other types of electromagnetic radiation are gamma rays, X-rays, visible light, infrared rays, and radio waves. The progression of electromagnetic radiation through space can be visualized in different ways. Some experiments suggest that these rays travel in the form of waves. A physicist can actually measure the length of those waves (simply called their wavelength). It turns out that a smaller wavelength means more energy. At other times, it is more plausible to describe electromagnetic radiation as being contained and traveling in little packets, called photons.
    The distinguishing factor among the different types of electromagnetic radiation is their energy content. Ultraviolet radiation is more energetic than visible radiation and therefore has a shorter wavelength. To be more specific: Ultraviolet rays have a wavelength between approximately 100 nanometers and 400 nanometers whereas visible radiation includes wavelengths between 400 and 780 nanometers.

    The sun is a major source of ultraviolet rays. Though the sun emits all of the different kinds of electromagnetic radiation, 99% of its rays are in the form of visible light, ultraviolet rays, and infrared rays (also known as heat). Man-made lamps can also emit UV radiation, and are often used for experimental purposes.

    ReplyDelete
  39. This comment has been removed by the author.

    ReplyDelete
  40. Ultraviolet rays

    Ultraviolet rays can be subdivided into three different wavelength bands—UV-A, UV-B, and UV-C. This is simply a convenient way of classifying the rays based on the amount of energy they contain and their effects on biological matter. UV-C is most energetic and most harmful; UV-A is least energetic and least harmful.
    UV-C rays do not reach the earth's surface because of the ozone layer. When UV-C rays meet ozone molecules in the high layers of the atmosphere, the energy they carry is enough to break apart the bond of the molecule, and is absorbed in the process. Therefore, no UV-C rays from the sun ever come into contact with life on earth, though man-made UV-C rays can be a hazard in certain professions, such as welding.
    UV-B rays have a lower energy level and a longer wavelength than UV-C. As their energy is often not sufficient to split an ozone molecule, some of them extend down to the earth's surface. UV-A rays do not have enough energy to break apart the bonds of the ozone, so UV-A radiation passes through the earth's atmosphere almost unfiltered. As both UV-B and UV-A rays can be detrimental to our health, it is important that we protect ourselves. This can be done in a variety of ways. The most obvious is to reduce the amount of time one spends in the sun, particularly between the hours of 11 am and 3 pm, when the sun is at its highest in the sky.
    Other factors that have an influence on UV levels are the physical features of the land—sand, snow, and water all tend to reflect UV rays. This phenomenon is called albedo. Some of the ultraviolet rays reflected off the ground encounter scattering by air molecules, aerosols or clouds back down to the earth, thus increasing the total irradiance. When there is snow on the ground the amount of time it takes for sunburn to occur is therefore significantly reduced.
    Also, the closer one is to the equator, the more ultraviolet rays one is exposed to. This can be explained by the fact that the sun is usually higher in the sky at low latitudes. In addition, the ozone layer is thinner at the equator than it is over, for example, the United States or Europe, and this also contributes to more UV.
    Since the 1980s, the polar regions have been affected by the ozone hole. Under the ozone hole, biologically relevant UV levels are 2-3 times as high as they were before.
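
    The three bands can be written down as a simple lookup; in this Python sketch the band edges (100/280/315/400 nm) are the commonly used boundaries, consistent with the 100-400 nm range quoted earlier:

    def uv_band(wavelength_nm):
        # Classify a wavelength into the UV bands described above
        if 100 <= wavelength_nm < 280:
            return "UV-C (most energetic; absorbed by the ozone layer)"
        if 280 <= wavelength_nm < 315:
            return "UV-B (partly reaches the surface)"
        if 315 <= wavelength_nm <= 400:
            return "UV-A (passes the atmosphere almost unfiltered)"
        return "not ultraviolet"

    print(uv_band(254))  # UV-C: the wavelength of germicidal lamps
    print(uv_band(365))  # UV-A: a typical "black light"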

    ReplyDelete
  41. Though Physics class is just my second favorite next to English, I'm really enjoying it, especially during activities.
    Mr. James Joule is the topic that I've chosen because he is very relevant to my life as his first name & surname's first letter is also J (Jenica Josue).

    James Prescott Joule (1818-1889), British physicist, born in Salford, Lancashire, England. One of the outstanding physicists of his day, Joule is best known for his research in electricity and thermodynamics. In the course of his investigations of the heat emitted in an electrical circuit, he formulated the law, now known as Joule's law*, of electric heating, which states that the amount of heat produced each second in a conductor by a current of electricity is proportional to the resistance of the conductor and to the square of the current. Joule experimentally verified the law of conservation of energy in his study of the transfer of mechanical energy into heat energy.
    Using many independent methods, Joule determined the numerical relation between heat and mechanical energy, or the mechanical equivalent of heat. The unit of energy called the joule is named after him; it is equal to 1 watt-second, or 10 million ergs, or about 0.000948 British thermal unit. Together with the physicist William Thomson (later Baron Kelvin), Joule found that the temperature of a gas falls when it expands without doing any work. This principle, which became known as the Joule-Thomson effect, underlies the operation of common refrigeration and air conditioning systems.


    *Q = I²Rt (where I is the current, R the resistance, and t the time)
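
    A one-line Python check of Joule's law, with made-up circuit values:

    # Joule's law from the footnote above: Q = I^2 * R * t
    def joule_heat(current_a, resistance_ohm, time_s):
        # Heat in joules produced in a resistor carrying a steady current
        return current_a**2 * resistance_ohm * time_s

    # Illustrative: 2 A through a 10 ohm heating element for 60 seconds
    print(joule_heat(2, 10, 60))  # 2400 J, i.e. a steady 40 W for one minute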





    my apology,,.mine is a 216 word long topic., 16 words in excess...:(


    Hope I'll be the last to comment.
    By the way, it's Feb. 14, 2010 in the Philippines: Happy Valenzuela Day, Happy Valentine's Day and Happy Chinese New Year! God Bless...

    ReplyDelete
  42. Water shows unusual expansion. If we take a cube of ice at -5°C and heat it, it expands till ice starts melting. During melting its temperature remains 0°C but its volume decreases. If heat is continuously supplied to water at 0°C, it further contracts up to 4°C and then it starts expanding. Thus water has its minimum volume and maximum density at 4°C.

    The anomalous expansion of water helps preserve aquatic life during very cold weather. When temperature falls, the top layer of water in a pond contracts, becomes denser and sinks to the bottom. A circulation is thus set up until the entire water in the pond reaches its maximum density at 4°C. If the temperature falls further, the top layer expands and remains on the top till it freezes. Thus even though the upper layer are frozen the water near the bottom is at 4°C and the fishes etc. can survive in it easily.

    We know that most substances contract when they are cooled, but water expands when cooled from 4°C to 0°C. This unusual expansion is the anomalous expansion of water, and it reflects water's greater density at 4°C and lower density at 0°C. Hence ice floats on water, and the water at the bottom of a pool in winter is warmer than at the surface, keeping aquatic life under water safe.

    Normally, liquids contract on cooling and become denser. However, water contracts when cooled to a temperature of 4°C and thereafter expands as it is cooled further from 4°C to 0°C. Water attains its maximum density at 4°C. This phenomenon is useful for the preservation of marine life in very cold temperatures. Initially, the surface water in water bodies starts cooling. Upon reaching 4°C, the surface water descends to the bottom as it is denser. Upon further cooling between 4°C and 0°C, a temperature gradient is set up in the depths of the water body whereby the bottom-most layer is at 4°C and the temperature gradually drops as one goes upwards. At 0°C, ice is formed. Ice, being lighter than water, floats to the upper surface. Further, water and ice are bad conductors of heat. All this helps maintain the temperature of the water at the bottom at 4°C. It is in this layer that marine life is able to sustain itself.

    Anomalous expansion of water takes place because of hydrogen bonding. In ice, the hydrogen bonds hold the molecules in an open, low-density lattice. When ice melts and the water is warmed toward 277 K (4°C), this open hydrogen-bonded structure gradually collapses, and the contraction it causes is greater than the ordinary thermal expansion, so the water shrinks. At 277 K water has its maximum density; beyond that the open structure is essentially gone and water behaves as the kinetic theory of molecules predicts, increasing in volume when heated and contracting when cooled. The same thing happens in reverse when water is cooled back below 277 K.

    ReplyDelete
  43. ♥Solar Energy♥

    Energy from the sun can be used in many ways. Light is the most common way we take advantage of solar energy, lighting buildings via windows and skylights. Heat, or thermal energy, such as passive solar room heating, solar hot water and pool heating, is another application of solar "thermal" energy. With photovoltaics or "PV" (also sometimes called "solar electric"), we harness solar energy by directly converting sunlight into electricity using semiconductors. Even though all of these are considered solar energy systems, the equipment, technologies and costs are substantially different for each. This section will help to clarify the basics of these systems and how they may be deployed to meet your energy needs and carbon reduction efforts.

    Solar cells provide the energy to run satellites that orbit the Earth. These give us satellite TV, telephones, navigation, weather forecasting, the internet and all manner of other facilities.

    Solar energy has been the power supply of choice for industrial applications where power is required at remote locations; in these applications solar power is economic without subsidy. Most such systems require a few kilowatts of power. Examples are powering repeater stations for microwave, TV and radio, telemetry and radio telephones.

    Solar energy is also frequently used in transportation signalling, e.g. offshore navigation buoys, lighthouses, aircraft warning lights on pylons or structures, and increasingly in road traffic warning signals. Solar is used to power environmental and situation monitoring equipment and corrosion protection systems (based on impressing a current) for pipelines, well-heads, and bridges or other structures. As before, for larger electrical loads it can be cost effective to configure a hybrid power system that links the PV with a small diesel generator. Solar's great benefit here is that it is highly reliable and requires little maintenance, so it is ideal in places that are hard to get to.

    Advantages of Solar Energy:
    1.Saves you money
    2.Environmentally friendly
    3.Independent/ semi-independent
    4. Low/ no maintenance

    Disadvantages of Solar Energy:
    1.Solar panels require quite a large area for installation to achieve a good level of efficiency.
    2.The efficiency of the system also relies on the location of the sun, although this problem can be overcome with the installation of certain components.
    3.The production of solar energy is influenced by the presence of clouds or pollution in the air.



    ♥>>>ann margaret<<<♥

    ReplyDelete
  44. Hydroelectric Power Plant

    Hydroelectricity is one of the main forms of energy in use today. Its use is being promoted in many countries of the world as a renewable and non-polluting source of energy. The industrialized nations of the world have drawn flak in recent times for releasing high concentrations of green house gases into the atmosphere. The regulations of the Kyoto Protocol are making things tougher. Hence greater interest is being shown in making use of non-polluting energy sources.
    Hydroelectricity is electricity generated by hydropower, i.e., the production of power through use of the gravitational force of falling or flowing water. It is the most widely used form of renewable energy. Once a hydroelectric complex is constructed, the project produces no direct waste, and has a considerably lower output level of the greenhouse gas carbon dioxide (CO2) than fossil fuel powered energy plants. Worldwide, hydroelectricity supplied an estimated 715,000 MWe in 2005. This was approximately 19% of the world's electricity (up from 16% in 2003), and accounted for over 63% of electricity from renewable sources. Some jurisdictions do not consider large hydro projects to be a sustainable energy source, due to the human, economic and environmental impacts of dam construction and maintenance.

    Functioning of a hydroelectric power plant

    Hydroelectricity is produced in a hydroelectric power plant. In this plant, the water is released from a high location. The potential energy present in the water is converted into kinetic energy, which is then used to rotate the blades of a turbine. The turbine is hooked to the generator which produces electricity.

    The main components of hydroelectric power plant are:

    a) The reservoir: Water from a natural water body like a river is stored in the reservoir. This reservoir is built at a level higher than the turbine.

    b) The dam: The flow of water stored in the reservoir is obstructed by huge walls of the dam. This prevents the water from flowing and helps us harness the energy present in it. The dam consists of gates present at its bottom, which can be lifted to allow the flow of water through them.

    c) The penstock: This connects the reservoir with the turbine propeller and runs in a downward inclined manner. When the gates of the dam are lifted, the force of gravity makes the water flow down the penstock and reach the blades of the turbine. As the water flows through the penstock, the potential energy of water stored in the dam is converted into kinetic energy.

    d) The turbine: The kinetic energy of the running water turns the blades of the turbine. The turbine can be either a Pelton Wheel Model or a Centrifugal type. The turbine has a shaft connected to the generator.

    e) The generator: A shaft runs from the turbine to the generator. When the blades of the turbine rotate, the shaft turns the rotor of the generator, which produces electric current.

    f) Power lines: The power produced in the generator is sent to various power distribution stations through the power lines.

    After passing through the turbine, the water flows through an outlet pipe called the tailrace and is released into the river downstream of the power plant.
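
    The available electrical power follows directly from the energy conversion described above: P = eta * rho * g * h * Q, where h is the head (height drop) and Q the flow rate. A minimal Python sketch with assumed figures (the 90% efficiency and plant dimensions are illustrative, not data for any real plant):

    def hydro_power(head_m, flow_m3_per_s, efficiency=0.9):
        # P = eta * rho * g * h * Q, in watts
        rho = 1000.0  # density of water, kg/m^3
        g = 9.81      # gravitational acceleration, m/s^2
        return efficiency * rho * g * head_m * flow_m3_per_s

    # Illustrative plant: 100 m head, 50 m^3/s through the penstock
    print(hydro_power(100, 50) / 1e6)  # ~44.1 MW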

    ReplyDelete
  45. ♥Sound♥

    Sound is a series of longitudinal or compression waves that move through air or other materials. Sound does not travel in a vacuum. Sound is made by air vibrating. The same is true for sounds made by musical instruments. The difference between NOISE and MUSIC is that musical sounds are organized into patterns that have pitch and rhythm. Noise is just random, disorganized sounds. Sounds are made and travel in the same way whether they are musical sounds or noise.

    A musical sound is called a tone, and is produced by air vibrating a certain number of times per second. These vibrations are called waves. These sound waves must be contained in some way so that the performer can control the loudness, quality of the tone, and how long it plays. Most musical instruments have a reed, a string, or some other device that creates sound waves when moved. Sounds differ because of harmonics, which are higher and quieter sounds mixed in. They are not heard separately, but add to the tone of the sound, making an oboe sound different from a trumpet or drum.

    The number of times that a sound wave vibrates in a second is called its frequency, measured in cycles per second, a unit called the hertz. High notes have a higher frequency than lower notes, and different types of sound waves have different shapes.

    Sound is a regular mechanical vibration that travels through matter as a waveform. It consists of longitudinal or compression waves in matter. Although it is commonly associated with air, sound will readily travel through many materials, such as water and steel. Some insulating materials absorb much of the sound waves, preventing the waves from penetrating the material. Because sound is the vibration of matter, it does not travel through a vacuum or in outer space. Sound waves are also different from light waves: light and radio waves are electromagnetic waves, related to electrical and magnetic fields; they readily travel through space and are completely different from sound, which is the vibration of matter.

    Characteristics of sound:
    1.Wavelength
    2.Speed or velocity
    3.Frequency
    4.Amplitude

    Sound consists of longitudinal or compression waves that move through air or other materials. It does not travel in a vacuum. Sound has the characteristics of wavelength, frequency, speed and amplitude. Sound waves are created by the vibration of some object and are detected when they cause a detector to vibrate.
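
    The characteristics listed above are tied together by v = f × wavelength (speed equals frequency times wavelength). A small Python sketch, using the common room-temperature speed of sound in air of about 343 m/s:

    # v = f * wavelength, so wavelength = v / f
    def wavelength(speed_m_s, frequency_hz):
        return speed_m_s / frequency_hz

    v_air = 343.0                    # approximate speed of sound in air, m/s
    print(wavelength(v_air, 440))    # ~0.78 m: concert pitch A
    print(wavelength(v_air, 20000))  # ~0.017 m: upper limit of human hearing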

    ♥>>>teejay sulla<<<♥

    ReplyDelete
  46. Bipolar (junction) Transistor
    _____________________________

    A bipolar (junction) transistor (BJT) is a three-terminal electronic device constructed of doped semiconductor material and may be used in amplifying or switching applications. Bipolar transistors are so named because their operation involves both electrons and holes. Charge flow in a BJT is due to bidirectional diffusion of charge carriers across a junction between two regions of different charge concentrations. This mode of operation is contrasted with unipolar transistors, such as field-effect transistors, in which only one carrier type is involved in charge flow due to drift. By design, most of the BJT collector current is due to the flow of charges injected from a high-concentration emitter into the base where they are minority carriers that diffuse toward the collector, and so BJTs are classified as minority-carrier devices.

    A BJT consists of three differently doped semiconductor regions: the emitter region, the base region and the collector region. These regions are, respectively, p type, n type and p type in a PNP transistor, and n type, p type and n type in an NPN transistor. Each semiconductor region is connected to a terminal, appropriately labeled: emitter (E), base (B) and collector (C).


    Structure

    The base is physically located between the emitter and the collector and is made from lightly doped, high resistivity material. The collector surrounds the emitter region, making it almost impossible for the electrons injected into the base region to escape being collected.

    "The bipolar junction transistor, unlike other transistors, is usually not a symmetrical device. This means that interchanging the collector and the emitter makes the transistor leave the forward active mode and start to operate in reverse mode".


    2 Types of Bipolar Transistor
    _____________________________

    NPN
    (Figure caption: The symbol of an NPN bipolar junction transistor.)

    NPN is one of the two types of bipolar transistors, in which the letters "N" and "P" refer to the majority charge carriers inside the different regions of the transistor. Most bipolar transistors used today are NPN, because electron mobility is higher than hole mobility in semiconductors, allowing greater currents and faster operation.

    1. NPN transistors consist of a layer of P-doped semiconductor (the "base") between two N-doped layers. A small current entering the base in common-emitter mode is amplified in the collector output. In other terms, an NPN transistor is "on" when its base is pulled high relative to the emitter.

    The arrow in the NPN transistor symbol is on the emitter leg and points in the direction of the conventional current flow when the device is in forward active mode.

    "One mnemonic device for identifying the symbol for the NPN transistor is "not pointing in, or 'not pointing, no' "


    2. PNP - The other type of BJT is the PNP with the letters "P" and "N" referring to the majority charge carriers inside the different regions of the transistor.

    PNP transistors consist of a layer of N-doped semiconductor between two layers of P-doped material. A small current leaving the base in common-emitter mode is amplified in the collector output. In other terms, a PNP transistor is "on" when its base is pulled low relative to the emitter.

    The arrow in the PNP transistor symbol is on the emitter leg and points in the direction of the conventional current flow when the device is in forward active mode.

    "One mnemonic device for identifying the symbol for the PNP transistor is "pointing in proudly, or 'pointing in - pah'."


    ....hope that we will learn more after we read this very interesting topic!!.. -'bye"..

    ReplyDelete
  47. telephone
    The telephone (from the Greek: τῆλε, tēle, "far" and φωνή, phōnē, "voice") is a telecommunications device that transmits and receives sound, most commonly the human voice. It is one of the most common household appliances in the developed world, and has long been considered indispensable to business, industry and government. The word "telephone" has been adapted to many languages and is widely recognized around the world.

    The device operates principally by converting sound waves into electrical signals, and electrical signals into sound waves. Such signals, when conveyed through telephone networks (and often converted to electronic and/or optical signals), enable nearly every telephone user to communicate with nearly every other user worldwide.

    Credit for the invention of the electric telephone is frequently disputed, and new controversies over the issue have arisen from time to time. As with other great inventions such as radio, television, the light bulb, and the computer, there were several inventors who did pioneering experimental work on voice transmission over a wire and improved on each other's ideas. Innocenzo Manzetti, Antonio Meucci, Johann Philipp Reis, Elisha Gray, Alexander Graham Bell, and Thomas Edison, among others, have all been credited with pioneering work on the telephone. An undisputed fact is that Alexander Graham Bell was the first to be awarded a patent for the electric telephone by the United States Patent and Trademark Office (USPTO) in March 1876. That first patent by Bell was the master patent of the telephone, from which all other patents for electric telephone devices and features flowed.
    The early history of the telephone became and still remains a confusing morass of claims and counterclaims, which were not clarified by the huge mass of lawsuits that hoped to resolve the patent claims of many individuals and commercial competitors. The Bell and Edison patents, however, were forensically victorious and commercially decisive.
    The Hungarian engineer Tivadar Puskás invented the telephone switchboard in 1876, which allowed for the formation of telephone exchanges, and eventually networks.

    ReplyDelete
    A traditional landline telephone system, also known as "plain old telephone service" (POTS), commonly handles both signaling and audio information on the same twisted pair of insulated wires: the telephone line. Although originally designed for voice communication, the system has been adapted for data communication such as Telex, Fax and Internet communication. The signaling equipment consists of a bell, beeper, light or other device to alert the user to incoming calls, and number buttons or a rotary dial to enter a telephone number for outgoing calls. A twisted pair line is preferred as it is more effective at rejecting electromagnetic interference (EMI) and crosstalk than an untwisted pair.

    The telephone consists of an alerting device, usually a ringer, that remains connected to the phone line whenever the phone is "on hook", and other components which are connected when the phone is "off hook". These include a transmitter (microphone), a receiver (speaker) and other circuits for dialing, filtering, and amplification. A calling party wishing to speak to another party will pick up the telephone's handset, thus operating a button switch or "switchhook", which puts the telephone into an active (off hook) state by connecting the transmitter (microphone), receiver (speaker) and related audio components to the line. This circuitry has a low resistance (less than 300 ohms) which causes DC current (48 volts, nominal) from the telephone exchange to flow through the line. The exchange detects this DC current, attaches a digit receiver circuit to the line, and sends a dial tone to indicate readiness.

    On a modern push-button telephone, the calling party then presses the number buttons in a sequence corresponding to the telephone number of the called party. The buttons are connected to a tone generator circuit that produces DTMF tones which end up at a circuit at the exchange. A rotary dial telephone employs pulse dialing, sending electrical pulses corresponding to the telephone number to the exchange. (Most exchanges are still equipped to handle pulse dialing.) Provided the called party's line is not already active or "busy", the exchange sends an intermittent ringing signal (about 90 volts AC in North America and the UK, and 60 volts in Germany) to alert the called party to an incoming call. If the called party's line is active, the exchange sends a busy signal to the calling party. However, if the called party's line is active but has call waiting installed, the exchange sends an intermittent audible tone to the called party to indicate an incoming call.

    The phone's ringer is connected to the line through a capacitor, a device which blocks the flow of DC current but permits AC current. This constitutes a mechanism whereby the phone draws no current when it is on hook, but the exchange circuitry can send an AC voltage down the line to activate the ringer for an incoming call. When a landline phone is inactive or "on hook", the circuitry at the telephone exchange detects the absence of DC current flow and therefore "knows" that the phone is on hook, with only the alerting device electrically connected to the line. When a party initiates a call to this line, the exchange transmits the ringing signal. When the called party picks up the handset, they actuate a double-circuit switchhook which simultaneously disconnects the alerting device and connects the audio circuitry to the line. This, in turn, draws DC current through the line, confirming that the called phone is now active. The exchange circuitry turns off the ring signal, and both phones are now active and connected through the exchange.

    ReplyDelete
  49. The parties may now converse as long as both phones remain off hook. When a party "hangs up", placing the handset back on the cradle or hook, DC current ceases to flow in that line, signaling the exchange to disconnect the call. Calls to parties beyond the local exchange are carried over "trunk" lines which establish connections between exchanges. In modern telephone networks, fiber-optic cable and digital technology are often employed in such connections. Satellite technology may be used for communication over very long distances.

    ReplyDelete
  50. Early commercial instruments


    (Figure caption: Modern emergency telephone powered by sound alone.)
    Early telephones were technically diverse. Some used a liquid transmitter, some had a metal diaphragm that induced current in an electromagnet wound around a permanent magnet, and some were "dynamic": their diaphragm vibrated a coil of wire in the field of a permanent magnet, or the coil vibrated the diaphragm. The dynamic kind survived in small numbers through the 20th century in military and maritime applications, where its ability to create its own electrical power was crucial. Most, however, used the Edison/Berliner carbon transmitter, which was much louder than the other kinds, even though it required an induction coil, actually acting as an impedance matching transformer to make it compatible with the impedance of the line. The Edison patents kept the Bell monopoly viable into the 20th century, by which time the network was more important than the instrument.
    Early telephones were locally powered, using either a dynamic transmitter or by the powering of a transmitter with a local battery. One of the jobs of outside plant personnel was to visit each telephone periodically to inspect the battery. During the 20th century, "common battery" operation came to dominate, powered by "talk battery" from the telephone exchange over the same wires that carried the voice signals.
    Early telephones used a single wire for the subscriber's line, with ground return used to complete the circuit (as used in telegraphs). The earliest dynamic telephones also had only one port opening for sound, with the user alternately listening and speaking (or rather, shouting) into the same hole. Sometimes the instruments were operated in pairs at each end, making conversation more convenient but also more expensive.

    ReplyDelete
  51. At first, the benefits of a telephone exchange were not exploited. Instead telephones were leased in pairs to a subscriber, who had to arrange for a telegraph contractor to construct a line between them, for example between a home and a shop. Users who wanted the ability to speak to several different locations would need to obtain and set up three or four pairs of telephones. Western Union, already using telegraph exchanges, quickly extended the principle to its telephones in New York City and San Francisco, and Bell was not slow in appreciating the potential.
    Signalling began in an appropriately primitive manner. The user alerted the other end, or the exchange operator, by whistling into the transmitter. Exchange operation soon resulted in telephones being equipped with a bell, first operated over a second wire, and later over the same wire, but with a condenser (capacitor) in series with the bell coil to allow the AC ringer signal through while still blocking DC (keeping the phone "on hook"). Telephones connected to the earliest Strowger automatic exchanges had seven wires, one for the knife switch, one for each telegraph key, one for the bell, one for the push-button and two for speaking.
    Rural and other telephones that were not on a common battery exchange had a magneto or hand-cranked generator to produce a high voltage alternating signal to ring the bells of other telephones on the line and to alert the operator.


    A U.S. candlestick telephone in use, circa 1915.
    In the 1890s a new, smaller style of telephone was introduced, packaged in three parts. The transmitter stood on a stand, known as a "candlestick" for its shape. When not in use, the receiver hung on a hook with a switch in it, known as a "switchhook". Previous telephones required the user to operate a separate switch to connect either the voice or the bell. With the new kind, the user was less likely to leave the phone "off the hook". In phones connected to magneto exchanges, the bell, induction coil, battery and magneto were in a separate bell box called a "ringer box." [3] In phones connected to common battery exchanges, the ringer box was installed under a desk or in another out-of-the-way place, since it did not need a battery or magneto.

    ReplyDelete
  52. Cradle designs were also used at this time, having a handle with the receiver and transmitter attached, separate from the cradle base that housed the magneto crank and other parts. They were larger than the "candlestick" and more popular.
    Disadvantages of single-wire operation, such as crosstalk and hum from nearby AC power wires, had already led to the use of twisted pairs and, for long-distance telephones, four-wire circuits. Users at the beginning of the 20th century did not place long-distance calls from their own telephones but made an appointment to use a special soundproofed long-distance telephone booth furnished with the latest technology.
    What turned out to be the most popular and longest lasting physical style of telephone was introduced in the early 20th century, including Bell's Model 102. A carbon granule transmitter and electromagnetic receiver were united in a single molded plastic handle, which when not in use sat in a cradle in the base unit. The circuit diagram of the Model 102 shows the direct connection of the receiver to the line, while the transmitter was induction coupled, with energy supplied by a local battery. The coupling transformer, battery, and ringer were in a separate enclosure. The dial switch in the base interrupted the line current by repeatedly but very briefly disconnecting the line 1-10 times for each digit, and the hook switch (in the center of the circuit diagram) disconnected the line and the transmitter battery while the handset was on the cradle.
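    As a small aside, the pulse-per-digit scheme described above is easy to state in code; this sketch (a hypothetical Python helper) just maps each dialed digit to its count of loop interruptions:

def dial_pulses(number):
    # Each digit opens the loop once per unit; "0" is sent as ten pulses.
    return [10 if d == "0" else int(d) for d in number if d.isdigit()]

print(dial_pulses("102"))   # [1, 10, 2]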
    After the 1930s, the base also enclosed the bell and induction coil, obviating the old separate ringer box. Power was supplied to each subscriber line by central office batteries instead of a local battery, which required periodic service. For the next half century, the network behind the telephone became progressively larger and much more efficient, but after the dial was added the instrument itself changed little until American Telephone & Telegraph Company (AT&T) introduced Touch-Tone dialing in the 1960s.

    ReplyDelete
  53. Digital telephony
    Main article: Digital Telephony
    The Public Switched Telephone Network (PSTN) has gradually evolved towards digital telephony, which has improved the capacity and quality of the network. End-to-end analog telephone networks were first modified in the early 1960s by upgrading transmission networks with T1 carrier systems, designed to carry the basic 3 kHz voice channel by sampling the bandwidth-limited analog voice signal and encoding it using pulse-code modulation (PCM). While digitization allows wideband voice on the same channel, the improved quality of a wider analog voice channel did not find a large market in the PSTN.
    Later transmission methods such as SONET and fiber-optic transmission further advanced digital transmission. Although analog carrier systems existed that multiplexed multiple analog voice channels onto a single transmission medium, digital transmission allowed lower cost and more channels multiplexed on the transmission medium. Today the end instrument often remains analog, but the analog signals are typically converted to digital signals at the serving area interface (SAI), central office (CO), or other aggregation point. Digital loop carriers (DLC) place the digital network ever closer to the customer premises, relegating the analog local loop to legacy status.
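    A minimal sketch of that digitization step in Python. Real T1 channels use mu-law companding; plain linear 8-bit quantization is used here for brevity, and the 1 kHz test tone is an assumed example:

import math

FS = 8000   # samples per second for a PCM voice channel; 8 bits each = 64 kbit/s

def pcm_encode(tone_hz=1000, duration_s=0.001):
    codes = []
    for n in range(int(FS * duration_s)):
        x = math.sin(2 * math.pi * tone_hz * n / FS)   # band-limited analog value
        codes.append(max(0, min(255, int((x + 1.0) * 127.5))))   # 8-bit code
    return codes

print(pcm_encode()[:8])   # first eight codes of a 1 kHz tone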
    IP telephony


    Hardware-based IP phone.
    Internet Protocol (IP) telephony, also known as Voice over Internet Protocol (VoIP), is a disruptive technology that is rapidly gaining ground against traditional telephone network technologies. As of January 2005, up to 10% of telephone subscribers in Japan and South Korea had switched to this digital telephone service. A January 2005 Newsweek article suggested that Internet telephony may be "the next big thing." As of 2006, many VoIP companies offered service to consumers and businesses.
    IP telephony uses an Internet connection and hardware IP Phones or softphones installed on personal computers to transmit conversations encoded as data packets. In addition to replacing POTS (plain old telephone service), IP telephony services are also competing with mobile phone services by offering free or lower cost connections via WiFi hotspots. VoIP is also used on private networks which may or may not have a connection to the global telephone network.
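    A toy illustration of that packetization idea in Python; the format is invented for the example and is not RTP or any real VoIP protocol:

def packetize(encoded_voice, payload_size=160):
    # 160 bytes = 20 ms of 8 kHz, 8-bit audio; each packet carries a
    # sequence number so the receiver can spot loss and reordering.
    for seq, start in enumerate(range(0, len(encoded_voice), payload_size)):
        yield seq, encoded_voice[start:start + payload_size]

stream = bytes(480)   # stand-in for 60 ms of encoded voice
for seq, payload in packetize(stream):
    print(seq, len(payload))   # three packets of 160 bytes each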
    IP telephones have two notable disadvantages compared to traditional telephones. Unless the IP telephone's components are backed up with an uninterruptible power supply or other emergency power source, the phone ceases to function during a power outage, as can occur during an emergency or disaster, exactly when the phone is most needed. Traditional phones connected to the older PSTN do not have this problem, since they are powered by the telephone company's battery supply, which continues to function even during a prolonged blackout. A second problem is the lack of a fixed address, which can affect the provision of emergency services such as police, fire or ambulance should someone call for them. Unless the registered user updates the IP phone's physical address after moving to a new residence, emergency services can be, and have been, dispatched to the wrong location.

    ReplyDelete
  54. Usage


    Fixed telephone lines per 100 inhabitants 1997-2007
    By the end of 2006, there were a total of nearly 4 billion mobile and fixed-line subscribers worldwide. This included 1.27 billion fixed-line subscribers and 2.68 billion mobile subscribers.
    Telephone operating companies
    Main article: List of telephone operating companies
    In some countries, many telephone operating companies (commonly abbreviated to telco in American English) are in competition to provide telephone services. The above Main article lists only facilities-based providers, not companies that lease services from facilities-based providers in order to serve their customers.

    ReplyDelete
  55. VECTOR QUANTITY

    Vectors have both magnitude and direction. The length of a vector represents its magnitude; the arrow shows its direction.

    Scalar Quantities
    Most of the physical quantities encountered in physics are either scalar or vector quantities. A scalar quantity is defined as a quantity that has magnitude only. Typical examples of scalar quantities are time, speed, temperature, and volume. A scalar quantity or parameter has no directional component, only magnitude. For example, the units for time (minutes, days, hours, etc.) represent an amount of time only and tell nothing of direction. Additional examples of scalar quantities are density, mass, and energy.

    Vector Quantities
    A vector quantity is defined as a quantity that has both magnitude and direction. To work with vector quantities, one must know the method for representing these quantities. Magnitude, or "size", of a vector is also referred to as the vector's "displacement." It can be thought of as the scalar portion of the vector and is represented by the length of the vector. By definition, a vector has both magnitude and direction. Direction indicates how the vector is oriented relative to some reference axis, as shown in Figure 1. Using north/south and east/west reference axes, vector "A" is oriented in the NE quadrant with a direction of 45° north of the EW axis. Giving direction to scalar "A" makes it a vector. The length of "A" is representative of its magnitude or displacement.
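    A short numeric companion in Python, using the same east-west and north-south axes; the component values are illustrative:

import math

def magnitude_and_direction(east, north):
    mag = math.hypot(east, north)                   # length = magnitude
    angle = math.degrees(math.atan2(north, east))   # degrees north of the EW axis
    return mag, angle

print(magnitude_and_direction(1.0, 1.0))   # (1.414..., 45.0): the NE-quadrant vector "A"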





    by: Jenny Valandra

    ReplyDelete
    Diffusion is a time-dependent process, constituted by the random motion of given entities and causing the statistical distribution of these entities to spread in space. The concept of diffusion is tied to the notion of mass transfer, driven by a concentration gradient.
    The concept of diffusion emerged in the physical sciences. The paradigmatic examples were heat diffusion, molecular diffusion and Brownian motion. Their mathematical description was elaborated by Joseph Fourier in 1822, Adolf Fick in 1855, and Albert Einstein in 1905.
    Applications outside physics were pioneered by Louis Bachelier, who in 1900 used a random-walk model to describe price fluctuations on financial markets. In a less quantitative way, the concept of diffusion is invoked in the social sciences to describe the spread of ideas.

    In molecular diffusion, the moving entities are small molecules. They move at random because they frequently collide. Diffusion is the resulting net transport of molecules from a region of higher concentration to one of lower concentration. Brownian motion is observed in molecules that are so large that they are not driven by their own thermal energy but by collisions with solvent particles.
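    A minimal random-walk sketch of that net transport in Python (arbitrary step lengths): many particles start together, each steps at random, and their spread grows like the square root of the number of steps:

import random

def spread_after(n_steps, n_particles=10000):
    positions = [0] * n_particles
    for _ in range(n_steps):
        positions = [x + random.choice((-1, 1)) for x in positions]
    mean = sum(positions) / n_particles
    return (sum((x - mean) ** 2 for x in positions) / n_particles) ** 0.5

print(spread_after(100))   # close to sqrt(100) = 10 step lengths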
    While Brownian motion of large molecules is observable under a microscope, small-molecule diffusion can only be probed in carefully controlled experimental conditions. Under normal conditions, molecular diffusion is relevant only on length scales between nanometer and millimeter. On larger length scales, transport in liquids and gases is normally due to another transport phenomenon, convection.
    In contrast, heat conduction through solid media is an everyday occurrence (e.g. a metal spoon partly immersed in a hot liquid). This explains why the mathematics of diffusion was first worked out for the transport of heat, not of mass.

    ReplyDelete
  57. Marie Antonette Repuya ~IV- ROMANCE

    Topic: Resistor

    A resistor is a two-terminal electronic component that produces a voltage across its terminals proportional to the electric current passing through it, in accordance with Ohm's law:


    V = IR
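    A worked use of the formula in Python; the component values are arbitrary examples:

def voltage(current_a, resistance_ohm):
    return current_a * resistance_ohm    # V = I * R

def current(voltage_v, resistance_ohm):
    return voltage_v / resistance_ohm    # I = V / R

print(voltage(0.5, 220))    # 110.0 V across 220 ohms carrying 0.5 A
print(current(9.0, 1000))   # 0.009 A (9 mA) through 1 kilohm at 9 V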


    Resistors are elements of electrical networks and electronic circuits and are ubiquitous in most electronic equipment. Practical resistors can be made of various compounds and films, as well as resistance wire (wire made of a high-resistivity alloy, such as nickel/chrome).


    The primary characteristics of a resistor are its tolerance, maximum working voltage, and power rating. Other characteristics include temperature coefficient, noise, and inductance. Less well known is the critical resistance: below this value the power dissipation limits the maximum permitted current, while above it the limit is the applied voltage. Critical resistance depends upon the material constituting the resistor as well as its physical dimensions; it is determined by design.


    Resistors can be integrated into hybrid and printed circuits, as well as integrated circuits. Size and position of leads (or terminals) are relevant to equipment designers; resistors must be physically large enough not to overheat when dissipating their power.




    ~

    ReplyDelete
  58. by pol nunga...,

    Newton's second law relates net force and acceleration. A net force on an object will accelerate it, that is, change its velocity. The acceleration will be proportional to the magnitude of the force and in the same direction as the force. The proportionality constant is the mass, m, of the object:

    F = ma

    In the International System of Units (also known as SI, after the initials of Système International), acceleration, a, is measured in meters per second per second. Mass is measured in kilograms; force, F, in newtons. A newton is defined as the force necessary to impart to a mass of 1 kg an acceleration of 1 m/sec/sec; this is equivalent to about 0.2248 lb.
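    The same definition as a one-line check in Python:

def force_newtons(mass_kg, accel_m_s2):
    return mass_kg * accel_m_s2   # F = m * a, in newtons

f = force_newtons(1.0, 1.0)
print(f, "N is about", f * 0.2248, "lb")   # 1.0 N is about 0.2248 lb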
    A massive object will require a greater force for a given acceleration than a small, light object. What is remarkable is that mass, which is a measure of the inertia of an object (inertia is its reluctance to change velocity), is also a measure of the gravitational attraction that the object exerts on other objects. It is surprising and profound that the inertial property and the gravitational property are determined by the same thing. The implication of this phenomenon is that it is impossible to distinguish at a point whether the point is in a gravitational field or in an accelerated frame of reference. Einstein made this one of the cornerstones of his general theory of relativity, which is the currently accepted theory of gravitation.

    ReplyDelete
  59. Laser

    Light amplification by stimulated emission of radiation (LASER or laser) is a mechanism for emitting electromagnetic radiation, typically visible light, via the process of stimulated emission. The emitted laser light is (usually) a spatially coherent, narrow, low-divergence beam that can be manipulated with lenses. In laser technology, "coherent light" denotes a light source that emits waves of identical frequency and phase, in step with one another. [1] This coherence differentiates the laser from light sources that emit incoherent light, whose phase varies randomly with time and position. Laser light is usually narrow-wavelength, nearly monochromatic light, although some lasers emit a broad spectrum of light or operate simultaneously at several different wavelengths.

    ReplyDelete
  60. Laser physics
    The gain medium of a laser is a material of controlled purity, size, concentration, and shape, which amplifies the beam by the process of stimulated emission. It can be of any state: gas, liquid, solid or plasma. The gain medium absorbs pump energy, which raises some electrons into higher-energy ("excited") quantum states. Particles can interact with light either by absorbing photons or by emitting photons. Emission can be spontaneous or stimulated. In the latter case, the photon is emitted in the same direction as the light that is passing by. When the number of particles in one excited state exceeds the number of particles in some lower-energy state, population inversion is achieved, and the amount of stimulated emission due to light that passes through is larger than the amount of absorption. Hence, the light is amplified. By itself, this makes an optical amplifier. When an optical amplifier is placed inside a resonant optical cavity, one obtains a laser.

    ReplyDelete
  61. The light generated by stimulated emission is very similar to the input signal in terms of wavelength, phase, and polarization. This gives laser light its characteristic coherence, and allows it to maintain the uniform polarization and often monochromaticity established by the optical cavity design.

    The optical cavity, a type of cavity resonator, contains a coherent beam of light between reflective surfaces so that the light passes through the gain medium more than once before it is emitted from the output aperture or lost to diffraction or absorption. As light circulates through the cavity, passing through the gain medium, if the gain (amplification) in the medium is stronger than the resonator losses, the power of the circulating light can rise exponentially. But each stimulated emission event returns a particle from its excited state to the ground state, reducing the capacity of the gain medium for further amplification. When this effect becomes strong, the gain is said to be saturated. The balance of pump power against gain saturation and cavity losses produces an equilibrium value of the laser power inside the cavity; this equilibrium determines the operating point of the laser. If the chosen pump power is too small, the gain is not sufficient to overcome the resonator losses, and the laser will emit only very small light powers. The minimum pump power needed to begin laser action is called the lasing threshold. The gain medium will amplify any photons passing through it, regardless of direction; but only the photons aligned with the cavity manage to pass more than once through the medium and so have significant amplification.
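    A qualitative sketch of that balance in Python (arbitrary units; all parameter values are made up for illustration): pumping raises the inversion N, and once stimulated emission outpaces cavity loss the photon number grows until gain saturation sets the equilibrium operating point:

def run_laser(pump, gain=1.0, n_decay=0.1, cavity_loss=0.5,
              dt=0.001, steps=200000):
    N, phi = 0.0, 1e-6   # population inversion and a tiny seed photon number
    for _ in range(steps):
        dN = pump - n_decay * N - gain * N * phi    # pumping, decay, stimulated emission
        dphi = gain * N * phi - cavity_loss * phi   # amplification vs. cavity loss
        N += dN * dt
        phi += dphi * dt
    return N, phi

print(run_laser(pump=0.02))   # below threshold: photon number stays negligible
print(run_laser(pump=0.20))   # above threshold: steady laser output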

    ReplyDelete
    The beam in the cavity and the output beam of the laser, if they occur in free space rather than waveguides (as in an optical fiber laser), are, at best, low-order Gaussian beams. However, this is rarely the case with powerful lasers. If the beam is not a low-order Gaussian shape, the transverse modes of the beam can be described as a superposition of Hermite-Gaussian or Laguerre-Gaussian beams (for stable-cavity lasers). Unstable laser resonators, on the other hand, have been shown to produce fractal-shaped beams.[4] The beam may be highly collimated, that is, parallel without diverging. However, a perfectly collimated beam cannot be created, due to diffraction. The beam remains collimated over a distance which varies with the square of the beam diameter, and eventually diverges at an angle which varies inversely with the beam diameter. Thus, a beam generated by a small laboratory laser such as a helium-neon laser spreads to a diameter of about 1.6 kilometers (1 mile) if shone from the Earth to the Moon. By comparison, the output of a typical semiconductor laser, due to its small diameter, diverges almost as soon as it leaves the aperture, at an angle of anything up to 50°. However, such a divergent beam can be transformed into a collimated beam by means of a lens. In contrast, the light from non-laser light sources cannot be collimated by optics nearly as well.
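    A numeric check of that inverse relation in Python: an ideal Gaussian beam of wavelength lam and waist radius w0 diverges with a half-angle of about lam / (pi * w0), so a wider beam spreads less. The waist values below are illustrative assumptions:

import math

def spot_diameter_m(lam_m, waist_m, distance_m):
    theta = lam_m / (math.pi * waist_m)   # far-field half-angle, radians
    return 2 * theta * distance_m         # approximate far-field spot diameter

LAM = 633e-9    # helium-neon wavelength, 633 nm
MOON = 3.84e8   # Earth-Moon distance, metres

print(spot_diameter_m(LAM, 0.5e-3, MOON))   # 0.5 mm waist: spot ~3e5 m across
print(spot_diameter_m(LAM, 0.10, MOON))     # 10 cm waist: spot ~1.5e3 m across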

    Although the laser phenomenon was discovered with the help of quantum physics, it is not essentially more quantum mechanical than other light sources. The operation of a free electron laser can be explained without reference to quantum mechanics.

    Modes of operation
    The output of a laser may be a continuous constant-amplitude output (known as CW or continuous wave); or pulsed, by using the techniques of Q-switching, modelocking, or gain-switching. In pulsed operation, much higher peak powers can be achieved.

    Some types of lasers, such as dye lasers and vibronic solid-state lasers, can produce light over a broad range of wavelengths; this property makes them suitable for generating extremely short pulses of light, on the order of a few femtoseconds (10^-15 s).

    Continuous wave operation
    In the continuous wave (CW) mode of operation, the output of a laser is relatively constant with respect to time. The population inversion required for lasing is continually maintained by a steady pump source.

    ReplyDelete
  63. Pulsed operation
    In the pulsed mode of operation, the output of a laser varies with respect to time, typically taking the form of alternating 'on' and 'off' periods. In many applications one aims to deposit as much energy as possible at a given place in as short a time as possible. In laser ablation, for example, a small volume of material at the surface of a work piece might evaporate if it receives the energy required to heat it sufficiently in a very short time. If, however, the same energy is spread over a longer time, the heat may have time to disperse into the bulk of the piece, and less material evaporates. There are a number of methods to achieve this.

    Q-switching
    Main article: Q-switching
    In a Q-switched laser, the population inversion (usually produced in the same way as CW operation) is allowed to build up by making the cavity conditions (the 'Q') unfavorable for lasing. Then, when the pump energy stored in the laser medium is at the desired level, the 'Q' is adjusted (electro- or acousto-optically) to favourable conditions, releasing the pulse. This results in high peak powers as the average power of the laser (were it running in CW mode) is packed into a shorter time frame.

    Modelocking
    Main article: Modelocking
    A modelocked laser emits extremely short pulses on the order of tens of picoseconds down to less than 10 femtoseconds. These pulses are typically separated by the time that a pulse takes to complete one round trip in the resonator cavity. Due to the Fourier limit (also known as energy-time uncertainty), a pulse of such short temporal length has a spectrum which contains a wide range of wavelengths. Because of this, the laser medium must have a broad enough gain profile to amplify them all. An example of a suitable material is titanium-doped, artificially grown sapphire (Ti:sapphire).
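    The Fourier limit can be put in numbers with Python: for a transform-limited Gaussian pulse the time-bandwidth product is about 0.441, and the 800 nm centre wavelength below is an assumed, typical Ti:sapphire value:

C = 3.0e8     # speed of light, m/s
TBP = 0.441   # Gaussian time-bandwidth product (FWHM)

def bandwidth_nm(pulse_s, centre_m=800e-9):
    dnu = TBP / pulse_s                   # required spectral width in Hz
    return dnu * centre_m ** 2 / C * 1e9  # converted to nanometres

print(bandwidth_nm(10e-15))   # a 10 fs pulse needs roughly 94 nm of spectrum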

    The modelocked laser is a most versatile tool for researching processes happening at extremely fast time scales also known as femtosecond physics, femtosecond chemistry and ultrafast science, for maximizing the effect of nonlinearity in optical materials (e.g. in second-harmonic generation, parametric down-conversion, optical parametric oscillators and the like), and in ablation applications. Again, because of the short timescales involved, these lasers can achieve extremely high powers.

    Pulsed pumping
    Another method of achieving pulsed laser operation is to pump the laser material with a source that is itself pulsed, either through electronic charging in the case of flashlamps, or another laser which is already pulsed. Pulsed pumping was historically used with dye lasers where the inverted population lifetime of a dye molecule was so short that a high energy, fast pump was needed. The way to overcome this problem was to charge up large capacitors which are then switched to discharge through flashlamps, producing a broad spectrum pump flash. Pulsed pumping is also required for lasers which disrupt the gain medium so much during the laser process that lasing has to cease for a short period. These lasers, such as the excimer laser and the copper vapour laser, can never be operated in CW mode.

    ReplyDelete
  65. MICROSCOPE^_^
    About 1590, two Dutch spectacle makers, Zaccharias Janssen and his son Hans, while experimenting with several lenses in a tube, discovered that nearby objects appeared greatly enlarged. That was the forerunner of the compound microscope and of the telescope. In 1609, Galileo, father of modern physics and astronomy, heard of these early experiments, worked out the principles of lenses, and made a much better instrument with a focusing device.

    Anton van Leeuwenhoek (1632-1723)
    The father of microscopy, Anton van Leeuwenhoek of Holland, started as an apprentice in a dry goods store where magnifying glasses were used to count the threads in cloth. He taught himself new methods for grinding and polishing tiny lenses of great curvature which gave magnifications up to 270 diameters, the finest known at that time. These led to the building of his microscopes and the biological discoveries for which he is famous. He was the first to see and describe bacteria, yeast plants, the teeming life in a drop of water, and the circulation of blood corpuscles in capillaries. During a long life he used his lenses to make pioneer studies on an extraordinary variety of things, both living and non living, and reported his findings in over a hundred letters to the Royal Society of England and the French Academy.

    Robert Hooke
    Robert Hooke, the English father of microscopy, re-confirmed Anton van Leeuwenhoek's discoveries of the existence of tiny living organisms in a drop of water. Hooke made a copy of Leeuwenhoek's light microscope and then improved upon his design.

    Charles A. Spencer
    Later, few major improvements were made until the middle of the 19th century. Then several European countries began to manufacture fine optical equipment but none finer than the marvelous instruments built by the American, Charles A. Spencer, and the industry he founded. Present day instruments, changed but little, give magnifications up to 1250 diameters with ordinary light and up to 5000 with blue light.

    Beyond the Light Microscope
    A light microscope, even one with perfect lenses and perfect illumination, simply cannot be used to distinguish objects that are smaller than half the wavelength of light. White light has an average wavelength of 0.55 micrometers, half of which is 0.275 micrometers. (One micrometer is a thousandth of a millimeter, and there are about 25,000 micrometers to an inch. Micrometers are also called microns.) Any two lines that are closer together than 0.275 micrometers will be seen as a single line, and any object with a diameter smaller than 0.275 micrometers will be invisible or, at best, show up as a blur. To see tiny particles under a microscope, scientists must bypass light altogether and use a different sort of "illumination," one with a shorter wavelength.
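    The paragraph's arithmetic, made explicit in Python:

def resolution_limit_um(wavelength_um):
    return wavelength_um / 2   # the half-wavelength rule described above

print(resolution_limit_um(0.55))   # 0.275 micrometres for average white light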

    ReplyDelete
    ^_^SOUND</3 Sound is a series of longitudinal or compression waves that move through air or other materials. Sound does not travel in a vacuum. Sound is made by air vibrating, and the same is true for sounds made by musical instruments. The difference between NOISE and MUSIC is that musical sounds are organized into patterns that have pitch and rhythm, while noise is just random, disorganized sound. Sounds are made and travel in the same way whether they are musical sounds or noise. A musical sound is called a tone and is produced by air vibrating a certain number of times per second. These vibrations are called waves. Sound waves must be contained in some way so that the performer can control the loudness, the quality of the tone, and how long it plays. Most musical instruments have a reed, a string, or some other device that creates sound waves when moved. Sounds differ because of harmonics, which are higher, quieter sounds mixed in; they are not heard separately, but they add to the tone of the sound, making an oboe sound different from a trumpet or a drum. The number of times that a sound wave vibrates in a second is called its frequency, which is measured in cycles per second, called hertz. High notes have a higher frequency than low notes, and this changes the shape of their waves; different types of sound waves have different shapes.

    Sound is a regular mechanical vibration that travels through matter as a waveform. It consists of longitudinal or compression waves in matter. Although it is commonly associated with air, sound will readily travel through many materials, such as water and steel. Some insulating materials absorb much of the sound waves, preventing the waves from penetrating the material. Because sound is the vibration of matter, it does not travel through a vacuum or in outer space. Sound waves are different from light waves. Light and radio waves are electromagnetic waves, completely different from sound, which is a vibration of matter. Electromagnetic waves are related to electric and magnetic fields and readily travel through space.

    Characteristics of sound:
    1. Wavelength
    2. Speed or velocity
    3. Frequency
    4. Amplitude

    Sound consists of longitudinal or compression waves that move through air or other materials. It does not travel in a vacuum. Sound has the characteristics of wavelength, frequency, speed and amplitude. Sound waves are created by the vibration of some object and are detected when they cause a detector to vibrate.
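    A small companion calculation in Python: speed, frequency and wavelength are tied by v = f * lambda; the 343 m/s speed of sound (air at about 20 °C) is an assumed typical value, not from the text:

def wavelength_m(frequency_hz, speed_m_s=343.0):
    return speed_m_s / frequency_hz   # v = f * lambda, rearranged

print(wavelength_m(440.0))    # about 0.78 m for the A above middle C
print(wavelength_m(4400.0))   # ten times the frequency, one tenth the wavelength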

    ReplyDelete
  67. VIBRATING STRINGS

    Vibration refers to mechanical oscillations about an equilibrium point. The oscillations may be periodic, such as the motion of a pendulum, or random, such as the movement of a tire on a gravel road. Vibration is occasionally "desirable", as it is in musical instruments.

    -Strings (music)-
    A string is the vibrating element that is the source of vibration in string instruments, such as the guitar, harp, piano, and members of the violin family. They are lengths of a flexible material kept under tension so that they may freely vibrate.
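    For an ideal string, the standard relation (assumed here; it is not stated above) gives the fundamental frequency from length, tension, and mass per unit length. The example values below are illustrative, roughly guitar-like:

import math

def fundamental_hz(length_m, tension_n, mass_per_length_kg_m):
    # f1 = sqrt(T / mu) / (2 * L) for an ideal flexible string
    return math.sqrt(tension_n / mass_per_length_kg_m) / (2 * length_m)

print(fundamental_hz(0.65, 70.0, 0.005))   # about 91 Hz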

    -Wave-
    A wave is a disturbance that propagates through space and time, usually with transference of energy. A mechanical wave is a wave that propagates or travels through a medium due to the restoring forces it produces upon deformation. There also exist waves capable of traveling through a vacuum.

    Usually a vibrating string produces a sound

    -Sound-
    Sound is a travelling wave which is an oscillation of pressure transmitted through a solid, liquid, or gas, composed of frequencies within the range of hearing and of a level sufficiently strong to be heard, or the sensation stimulated in organs of hearing by such vibrations.- Perception of sound.

    Frequency
    Frequency is the number of occurrences of a repeating event per unit time. It is also referred to as temporal frequency.The period is the duration of one cycle in a repeating event, so the period is the reciprocal of the frequency.

    Pitch (music)
    Pitch represents the perceived fundamental frequency of a sound. It is one of the three major auditory attributes of sounds, along with loudness and timbre. When the actual fundamental frequency can be precisely determined through physical measurement, it may differ from the perceived pitch because of overtones in the sound.

    Vibrating strings are the basis of any string instrument

    -String instrument-
    A string instrument is a musical instrument that produces sound by means of vibrating strings. In the Hornbostel-Sachs scheme of musical instrument classification, used in organology, they are called chordophones. The most common instruments in the string family are the guitar, violin, viola, and cello.

    -Guitar-
    The guitar is a musical instrument with ancient roots that adapts readily to a wide variety of musical styles. It typically has six strings, but four-, seven-, eight-, ten-, eleven-, twelve-, thirteen- and eighteen-string guitars also exist. The size and shape of the neck and body vary from one type of guitar to another.

    -Cello-
    The cello is a bowed string instrument. The word derives from the Italian violoncello. A person who plays a cello is called a cellist. The cello is used as a solo instrument, in chamber music, and as a member of the string section of an orchestra.

    Piano
    The piano is a musical instrument which is played by means of a keyboard. Widely used in Western music for solo performances, ensemble use, chamber music, and accompaniment, the piano is also very popular as an aid to composing and rehearsal.

    Observing string vibrations

    One can see the waveforms on a vibrating string if the frequency is low enough and the vibrating string is held in front of a CRT screen, such as that of a television or a computer (not of an oscilloscope).

    -Cathode ray tube-
    The cathode ray tube is a vacuum tube containing an electron gun and a fluorescent screen, with internal or external means to accelerate and deflect the electron beam, used to create images in the form of light emitted from the fluorescent screen.

    This effect is called the stroboscopic effect, and the rate at which the string seems to vibrate is the difference between the frequency of the string and the refresh rate of the screen. The same can happen with a fluorescent lamp.
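    The stroboscopic arithmetic in one line of Python: the apparent rate is the distance from the string's frequency to the nearest multiple of the refresh rate:

def apparent_hz(string_hz, refresh_hz):
    return abs(string_hz - refresh_hz * round(string_hz / refresh_hz))

print(apparent_hz(61.0, 60.0))   # 61 Hz string, 60 Hz screen: appears to move at 1 Hz
print(apparent_hz(120.0, 60.0))  # an exact multiple appears frozen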

    by: Honey Grace Vitug

    ReplyDelete
  68. -------------------TOPIC----------------------

    =================NET FORCE====================

    A net force, Fnet = F1 + F2 + … (also known as a resultant force), is a vector produced when two or more forces {F1, F2, …} act upon a single object. It is calculated by vector addition of the force vectors acting upon the object. A net force can also be defined as the overall force acting on an object.
    Figure 1: Vectors in the same direction

    When force A and force B act on an object in the same direction (parallel vectors), the net force (C) is equal to A + B, in the direction that both A and B point.
    Figure 2: Vectors in the opposite direction

    When force A and force B act on an object in opposite directions (180 degrees between them - anti-parallel vectors), the net force (C) is equal to |A - B|, in the direction of whichever one has the greater absolute value ("greater magnitude").

    (Note: The illustration treats the object, in this case a square, as a point, so that where on the object each force acts can be ignored.)
    Figure 3: Parallelogram construction for adding vectors

    When the angle between the forces is anything else, then the component forces must be added up using the parallelogram rule.

    For example, see Figure 3. This construction has the same result as moving F2 so its tail coincides with the head of F1, and taking the net force as the vector joining the tail of F1 to the head of F2. This procedure can be repeated to add F3 to the resultant F1 + F2, and so forth. Figure 4 is an example.
    Figure 4: Vectors in different directions

    Simply put, net force is the total force acting upon an object. For example, suppose two people push a box in the same direction. Person one pushes with 50 N, and person two pushes with 50 N as well; the net force acting on the box is 100 N.
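    The parallelogram rule in Python; forces are (magnitude in N, direction in degrees) pairs, and the values are illustrative:

import math

def net_force(forces):
    fx = sum(f * math.cos(math.radians(a)) for f, a in forces)   # add components
    fy = sum(f * math.sin(math.radians(a)) for f, a in forces)
    return math.hypot(fx, fy), math.degrees(math.atan2(fy, fx))  # magnitude, direction

print(net_force([(50, 0), (50, 0)]))     # same direction: 100 N
print(net_force([(50, 0), (50, 180)]))   # opposite directions: about 0 N
print(net_force([(50, 0), (50, 90)]))    # at right angles: about 70.7 N at 45 degrees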


    Posted by: Melvin Napoles

    mOmO insyd<-----

    ReplyDelete
  69. Jhon Chavez

    The conservation of energy is a fundamental concept of physics along with the conservation of mass and the conservation of momentum. Within some problem domain, the amount of energy remains constant and energy is neither created nor destroyed. Energy can be converted from one form to another (potential energy can be converted to kinetic energy) but the total energy within the domain remains fixed.
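    A simple numeric check of that statement in Python; the falling-mass example and its values are assumed for illustration. Potential energy converts to kinetic energy while the total stays fixed:

G = 9.8   # gravitational acceleration, m/s^2

def energies(mass_kg, height0_m, t_s):
    v = G * t_s                          # speed after t seconds of free fall
    h = height0_m - 0.5 * G * t_s ** 2   # remaining height
    ke = 0.5 * mass_kg * v ** 2
    pe = mass_kg * G * h
    return ke, pe, ke + pe

for t in (0.0, 0.5, 1.0):
    print(energies(2.0, 10.0, t))   # the total stays at 196 J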

    Today, conservation of "energy" refers to the conservation of the total energy of a system over time. This energy includes the energy associated with the rest mass of particles and all other forms of energy in the system. In addition, the invariant mass of a system of particles (the mass of the system as seen in its center-of-mass inertial frame, such as the frame in which it would need to be weighed) is also conserved over time for any single observer, and (unlike the total energy) is the same value for all observers. Therefore, in an isolated system, although matter (particles with rest mass) and "pure energy" (heat and light) can be converted to one another, both the total amount of energy and the total amount of mass of such systems remain constant over time, as seen by any single observer. If energy in any form is allowed to escape such systems (see binding energy), the mass of the system will decrease in correspondence with the loss.
    A consequence of the law of energy conservation is that a perpetual motion machine can only work perpetually if it delivers no energy to its surroundings. If such a machine produced more energy than was put into it, it would have to lose mass and thus eventually disappear, and it is therefore not possible.

    ReplyDelete