Axis Mundi: Unified Theory Foundations

Unified Theory Foundations

Introduction to the Unified Theory

The Unified Field Theory is sometimes called the Theory of Everything (TOE for short): the long-sought means of tying together all known phenomena to explain the nature and behaviour of all matter and energy in existence. The advantage of a unified theory over many fragmented theories is that a unified theory often offers a more elegant explanation of data, and may point towards future areas of study as well as predict nature's laws. Quoting Prof. Stephen Hawking from his book 'A Brief History of Time', we read one important warning: "...if we do discover a complete theory, it should in time be understandable in broad principle by everyone, not just a few scientists...". Unfortunately, the chances that mainstream knowledge will ever lead us to such a theory are getting smaller and smaller. Tesla had already warned us: "Today's scientists have substituted mathematics for experiments, and they wander off through equation after equation, and eventually build a structure which has no relation to reality." Lack of experimental work has always been the main issue, with scientists preferring to discard experimental results which disagree with their theories rather than the other way round. Today, we find these discarded results under the name of 'scientific anomalies', which are usually kept out of mainstream literature, or really hard to spot.

In physics, a field refers to an area under the influence of some force, such as gravity or electricity, for example. A unified field theory would reconcile seemingly incompatible aspects of various field theories, to create a single comprehensive set of equations. Such a theory could potentially unlock all the secrets of nature and make a myriad of wonders possible, including such benefits as new machine concepts and inexhaustible sources of clean energy, among many others.

For example, in 1861-65 James Clerk Maxwell explained the interrelation of electric and magnetic fields in his unified theory of electromagnetism. Then, in 1886-88 Hertz demonstrated that radio waves and light were both electromagnetic waves, as predicted by Maxwell's theory. Early in the 20th century, Albert Einstein's general theory of relativity - dealing with gravitation - became the second field theory. The term unified field theory was coined by Einstein, who was attempting to prove that electromagnetism and gravity were different manifestations of a single fundamental field. Regrettably, Einstein failed in this ultimate goal. When quantum theory entered the picture, the puzzle became more complex. The theory of relativity explains the nature and behaviour of all phenomena on the macroscopic level (things that are visible to the naked eye), while quantum theory explains the nature and behaviour of all phenomena on the microscopic (atomic and subatomic) level. Perplexingly, however, the two theories are incompatible. Unconvinced that nature would prescribe totally different modes of behaviour for phenomena that were simply scaled differently, Einstein sought a theory that would reconcile the two apparently irreconcilable theories that form the basis of modern physics.
Although electromagnetism and the strong and weak nuclear forces have long been explained by a single theory, known as the standard model, gravitation does not fit into the equation. The current quest for a unified field theory (sometimes called the holy grail of physicists) is largely focused on the superstring theory and, in particular, on an adaptation known as M-theory.

This theory aims at providing an explanation for all known forces and physical effects in the same language, and at showing that everything is made up of the same elementary entity. Physicists hope that a Grand Unified Theory will unify the strong, weak, and electromagnetic interactions. There have been several proposed Unified Theories, but we need data to pick which, if any, of these theories describes nature. All the interactions we observe are different aspects of the same, unified interaction. However, how can this be the case if the strong, weak and electromagnetic interactions are so different in strength and effect? Strangely enough, current data and theory suggest that these varied forces merge into one force when the particles being affected are at a high enough energy.
There are many things not yet properly explained by conventional physics. Forces like magnetism, static electricity and gravity need to be inter-related to a high degree, and their activities must be better understood. Also, what I call the hard particle paradigm has made this achievement seem much more difficult than it really is.

The Elementary Entity
Dielectric element


Any point in space may be completely defined by its position at a particular time. Any macro entity may be considered to be made up of a very large number of smaller entities, which in turn are made up of smaller entities. This kind of entity nesting does not go on to infinity: a lower limit exists, where the entities are of the smallest possible size, equivalent to the Planck length, and we will refer to these as elementary entities. Each entity is unique, and does not need to include any 'particle properties'; that is, even vacuum is composed of elementary entities. What makes one macro entity different from another is its structural shape. The basic requirements for this entity to exist are space and time.
Space and time are themselves an electrically oscillating circuit in space, and fill our universe, including what we refer to as vacuum. These elements can handle different, but discrete, values of energy, frequency and polarisation. Permittivity ε defines the elasticity of the elements, whilst permeability μ defines their inertial property. Later on it will be mathematically shown that all these parameters boil down to different relations between space and time. Macro elements (such as electrons, atoms, and particles of 'matter') can be accommodated in such elements by filling up the above parameters with those of the macro structure. Note that a mass is not tangible, and if one had to disassemble a mass he would end up with these elementary entities, which are nothing but electrical entities.

The hunt of scientists for the smallest existing atomic particle is a lost battle, because there is no such thing. Increasing the zooming power into matter will just show that dielectric elements can be infinitely small, and are in fact electrical in nature. The fact that we can 'feel' most 'massive' macro entities, such as a large number of atoms, is only a reaction of forces (electrically accounted for) between our own body's entity structure and the object's structure. We have to accept the fact that matter is just a complex structure of electromagnetic elementary entities, even the electron itself.

Origins of Mass, Motion, and Inertia


So what makes an observable 'material' entity different from empty space? The answer to this question was given in the theory of matter waves put forward in 1923 by the Nobel Prize winner Louis de Broglie. According to this theory, material particles are always linked with a 'system' of travelling waves, a 'wave-packet' or standing wave, forming the constituent parts of matter and determining its movements. An a priori assumption is that space is filled with travelling waves. In general these waves neutralise one another, but at certain points it happens that a great number of waves are in such a position, or structure, as to reinforce one another and form a marked observable wave crest. This wave crest then corresponds to a material particle! So, the answer to our original question is that a material entity is a structure made up of standing waves, whose origin is the travelling waves that make up empty space.

The animation shown here is of a water surface in a closed vibrating tray, made to vibrate in different modes by simply varying the frequency. One can easily see how a 'system of travelling waves', or better, standing waves, generates what most would call a particle, whilst in fact it is just a 3D formation of standing waves. Since, however, the waves may travel in different directions, they will part from one another, and the wave crest disappears, to re-appear again at a nearby point. The material particle has moved, or better, teleported. The same mechanism applies to the electromagnetic standing waves which constitute all matter. The wave crest will thus travel in quantum steps, but the velocity with which this is done is quite different from the one with which the underlying wave systems move, that is, light speed. The material particle in general moves at right angles to the surfaces of these waves, just as a ray of light is, as a rule, directed at right angles to the surface planes of the light waves.

First we have to accept the fact that every existing location in space is made up of discrete electromagnetic elementary entities. Since such an element fills up a three-dimensional space (volume), it has to have its own dimensions in space, with a shape or structure which leaves no discontinuities (or space-time voids) when neighbouring cells surround it to form a bigger cell. Space-time is the product of the volume taken by such an element and the time for the element to go through one oscillation. To comprehend this, we must slightly change our concept of what mass and particles are, thus reducing matter and vacuum to the same definition: waves. The actual structural shape is not important at this point; however, it should be one which promotes cascading of similar shapes to form bigger macro oscillating structures.
Moving an object with net electrical properties different from its surroundings means that an external force has to be applied in some way to this object, in order to reconfigure the parameters of all the electromagnetic elements in front of that object to the same properties as the moving object, and at the same instant reconfigure all the elements behind the object to those of the surrounding elements. 'In front' and 'behind' are relative to the direction of motion.

From a visual point of view, what we call 'matter' is continuously reconstructed in space, whether it has a static or dynamic position in space. Simply moving an object by just one millimetre would mean integrating a huge but discrete number of such processes. That is, motion is not a continuous process but rather a huge number of digital processes, crests of standing waves disappearing and reappearing elsewhere in space. Nothing actually moves; motion becomes an interpretation of different shapes at different times. The same applies in the spatial time dimension for stationary objects, since they are 'moving' in the time dimension. In this example, the arrow is composed of the green dielectric, and the surrounding white grid could be vacuum. Each grid cell is equal in size to one Planck length, the size of the basic electromagnetic entity, so nothing can exist in between, not even space or time.
The arrow, which could be a mass or particle, is thus seen to be moving, but looking closer, one can see that this is just an optical illusion, the same illusion that makes us believe that an object moves. Reconstruction itself does not need external energy to be applied, because the energy required in front of the moving object is balanced out by that released at the rear. But if we need to change the rate of reconstruction (a change in velocity or direction), then external energy would be required to create an imbalance between the front and rear electrical parameters.

This explains the fact that a stationary object needs an external energy source to start motion in space. It also explains inertia. Once a body is moving, it is reluctant to slow down unless external energy is applied (for example, friction). This is because once the external energy has been used to modify all parameters of the EM elements within the object, the object will continue to reconstruct at any location in space with those parameters, until other external energy is applied. This is why a stationary or static object remains still, and a moving object remains moving. It also explains why, when a rotating object is no longer restricted to moving round by its centripetal force, it continues to move in a straight line tangent to the point it left on its circular path - it just keeps the last reconstruction parameters. Note that the above motion properties work linearly only in a uniform time-space volume, that is, while the object is travelling through a uniform EM field, similar to the grid shown in the arrow model above.

Gravity explained


It is frightening to think that we are ourselves being reconstructed all the time by dielectric elements, and you may ask: what if I am not reconstructed during the next second? Or, what if a mistake happens during reconstruction? The answer is that unless there is a space-time void or discontinuity, reconstruction is a perfect process done with no energy expense. Input energy may, however, affect reconstruction, for example in a growing living cell, or an accelerating field. A continuous space-time volume can also be considered to be elastic, and if a dielectric volume has a non-linear shape, it will need external energy to be applied to a moving object within it, or, on the contrary, release energy during reconstruction. This means that there would be an imbalance of forces between the electromagnetic elements in front of and those behind the object. In such cases, objects at rest may not remain at rest, and moving objects may not continue on their straight path at constant velocity. The force we call gravity is one such case. In such a field, the energy needed to reconstruct an object in the higher flux density direction is less than that required to reconstruct the surrounding dielectric behind the object. Thus an object will move, with no external energy applied, towards the higher flux density.

To visualise the effect of a non-linear electromagnetic element volume (space-time) at a centre of gravity, imagine the surface of a rubber sheet with a uniform grid drawn on it, and visualise the grid when the rubber is pulled down at a point below its surface. Such bending of space-time is a result of this non-linearity of the parameters present in the dielectric volume. One method of generating a non-linear dielectric volume is to expose the whole dielectric volume under concern to a non-linear electric field, with the 'centre of gravity' being the centre of highest electric field flux density.

An example of this is our planet, which has a non-linear electric field gradient with its highest gradient near the surface. Linear gravity does not exist; gravitational force is always non-linear (anisotropic), pointing towards its centre. That is, Earth's g = 9.8 m/s² at ground level, but it decreases at higher altitudes. Linear gravity results in a linear space-time and is the same as zero gravity. Similarly, an electromagnetic element exposed to a linear force field will reconstruct the objects in it at zero energy transfer. However, when exposed to a non-linear force field, an object moving within it will experience a force imbalance in the direction of the highest force flux density. So the attraction of matter to centres of gravity is not a result of matter itself, but of the space-time 'stretching' and 'compression' in front of and behind the moving object. A massless dielectric, that is, space itself, would still be 'accelerated' towards the point of easier reconstruction. The mass movement is just an indication of the movement of its electromagnetic constituents.
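The altitude dependence mentioned here follows the familiar inverse-square law. A minimal numeric sketch (the standard surface value and mean Earth radius below are textbook figures, not taken from this text):

```python
# Surface gravity falls off with altitude following an inverse-square law:
#   g(h) = g0 * (R / (R + h))**2
G0 = 9.80665        # standard surface gravity, m/s^2
R_EARTH = 6.371e6   # mean Earth radius, m

def g_at_altitude(h):
    """Gravitational acceleration at altitude h (metres) above the surface."""
    return G0 * (R_EARTH / (R_EARTH + h)) ** 2

print(g_at_altitude(0))       # 9.80665 at ground level
print(g_at_altitude(400e3))   # roughly 8.68 m/s^2 at a 400 km orbit altitude
```

So even a few hundred kilometres up, the field is measurably weaker, which is the non-linearity the paragraph above appeals to.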

The dual nature of light and matter

Nearly two and a half thousand years ago, Democritus created the point particle of mass to represent the fundamental elements of tangible matter. This concept was satisfactory until about 1900, when the quantum properties of matter were found. Then, puzzles, problems, and paradoxes appeared, because most properties of matter derive from the wave structure of particles. Democritus couldn't know this, and until recently few persons challenged this concept, now embedded as a paradigm in mainstream science. Nevertheless, Schrödinger, de Broglie, Dirac, and Einstein, the founders of quantum theory, preferred a wave structure of matter, and in recent decades researchers have experimentally validated their intuition. Unfortunately, however, mainstream science is still stuck within the 'hard particle' paradigm and prefers to use the term 'wave-particle duality', instead of clearly establishing that 'hard particles' are nothing but an old scientific misinterpretation, due to our sense of touch. Once the wave structure of matter is accepted, most physics as we know it will automatically collapse, and that will be the start of a completely new unified science.

The notion of wave-particle duality states that an electron, for example, may sometimes act like a wave and sometimes like a particle. Conventional physics explains that when electrons are excited, packets of quanta are released as electromagnetic radiation and then 'hit' matter, the same way a ball hits a wall, resulting in kinetic and heat energy at the target. It also states that Energy = hf, where h is Planck's constant (h = 6.63E-34 Js); that is, the energy at a given frequency is not continuous, but rather exists in digital steps (quantised), in exact multiples of h. This wave-particle duality characteristic, together with the uncertainty principle and quantisation, totally upset most of the past well-known physicists, and although both wave and particle characteristics have been experimentally confirmed, it is still very unclear what mechanism is at work within matter or wave.
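The Planck relation E = hf quoted above is easy to evaluate numerically; the green-light frequency used below is just an illustrative choice:

```python
H = 6.63e-34  # Planck's constant in J*s, as quoted in the text

def photon_energy(f):
    """Energy of a single quantum at frequency f: E = h*f."""
    return H * f

# A green-light photon at roughly 5.45e14 Hz:
print(photon_energy(5.45e14))   # about 3.6e-19 J per quantum
```

Energy exchanged at that frequency comes only in whole multiples of this tiny amount, which is what 'quantised' means in practice.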
Some physicists, attempting to unify gravity with the other fundamental forces, have come to a startling prediction: every fundamental matter particle should have a massive "shadow" force carrier particle, and every force carrier should have a massive "shadow" matter particle. This relationship between matter particles and force carriers is called supersymmetry. For example, for every type of quark there may be a type of particle called a "squark". Again, such reasoning is highly distorted due to the hard particle paradigm. In another, separate experiment, researchers led by Valery Nesvizhevsky at the Institut Laue-Langevin in France isolated hundreds of neutrons from all major effects except gravity, then watched them in a special detector as gravity pulled them down. It was not a smooth fall! As expected by the standing wave theory, the neutrons fell in quantum jumps. This confirms that particle motion in the macro world is not a continuous process! So we see that hard particles and their motion get quite weird, with quantum movement and imaginary components to describe their motion. This is strong evidence that a reality based on hard particles moving in space is totally wrong, and has resulted in a whole mess of incompatible scientific fields. Contrary to what our senses make us believe, no experiment has ever shown that hard particles either exist or move! But, on the other hand, we have experiments that show particles disappearing from one place to appear in another place without 'moving' along a path, somehow as if being teleported.
For too many years people imagined atoms as point electrons orbiting around a nucleus. This myth, obviously imitating our planetary system, was shown wrong by quantum theory more than sixty years ago, and despite this fact, it is still the first basic model that students are exposed to in some schools. For example, in the hydrogen atom, quantum theory predicts the electron's presence as a symmetrical spherical cloud around the proton. Some physicists, still under the effect of the old myth, concluded that the point bits of matter were still there, even though quantum theory contains no notion of point particles, just because they insist that matter has to be made up of smaller and smaller matter. Actually, in the hydrogen atom both the electron wave-structure and the proton have the same centre. As described in my 'Particle' section, both the nucleus and the electron's structure can be imagined like onion layers - spherical concentric layers of electromagnetic waves around a centre. The amplitude of the EM waves decreases with radius, as shown in the graph below. There are no point masses, no orbits - just waves.

As you can read in 'The Particle' section, the spherical standing wave concept of matter solves all the enigmas of quantum theory and more. The spherical standing wave concept, based on matter structured of concentric spherical polyhedra, avoids and explains the paradoxes and problems of hard point particles. In such a theory, mass and charge simply do not exist in nature, and eliminating them from particle structure also gets rid of their problems. As a matter of fact, this theory takes only one thing as a priori: space, together with its built-in characteristics. Instead of mass, charge and time, we have wave nodes and their motion. Standing waves in space possess the properties of mass and charge which we observe in the macro world, but without the eternal problem of finding mass points, which do not exist. This simple theory is thus valid from the quantum level to the whole universe, unifying quantum theory with nature. The overwhelming proof of the standing wave structure of matter is the discovery that all the former empirical natural laws originate from the wave structure. In this theory, all things from the quantum level to the universe itself obey the same laws, and the shelled spherical polyhedra standing wave structure appears to agree with experimental observations, and can be used to explain such things as nuclear magic numbers, by simply studying their geometrical arrangement.

Matter from superposition of EM waves


The above diagram shows the amplitude of the EM waves as they reach the centre of the spherical wave. The rightmost side is the nucleus core, and the X-axis represents the distance from the core, or radius of the sphere. In the top diagram, observe the wave f(x,t) moving inward. Where the radius is zero, the amplitude is infinite. In the second diagram, observe the wave g(x,t) moving outward. Again, where the radius is zero, the amplitude becomes infinite. This infinite value at the core does not actually occur in practice, as the radius can never be smaller than the Planck length, so 1/Lp may never reach infinity, keeping the model valid.

The lower diagram is the resultant of the two waves, f(x,t)+g(x,t). That is, the two amplitudes are added together, by the superposition property of waves. The sum of the waves is the radial amplitude of the real electron. Watch carefully how the sum wave moves. The wave does not move inward or outward; it goes up and down. That is, it becomes a standing wave. Standing waves with nodes fixed in space, made up of incoming and outgoing EM travelling waves - this is what we usually call particles of matter. The nucleus, the electron, and all the myriad of other particles are just structures of standing spherical quantum waves.
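The superposition described above can be reproduced numerically in one dimension: an inward wave sin(kx + ωt) plus an outward wave sin(kx − ωt) sum to 2·sin(kx)·cos(ωt), a wave whose nodes stay fixed in space. The wavenumber and frequency below are arbitrary illustrative values:

```python
import math

K, W = 2 * math.pi, 3.0   # arbitrary wavenumber and angular frequency

def inward(x, t):
    return math.sin(K * x + W * t)   # wave travelling toward the centre

def outward(x, t):
    return math.sin(K * x - W * t)   # wave travelling away from the centre

def standing(x, t):
    # Superposition: sin(kx + wt) + sin(kx - wt) = 2*sin(kx)*cos(wt)
    return inward(x, t) + outward(x, t)

# The node at x = 0.5 (where sin(kx) = 0) stays at zero at every instant:
for t in (0.0, 0.3, 1.7):
    print(standing(0.5, t))   # effectively zero each time: a node fixed in space
```

The sum oscillates up and down in place rather than travelling, which is exactly the behaviour the diagram is meant to show.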

A wave may either be a travelling wave or a standing wave which is fixed in space. This means that matter is a structure of EM waves: not just a simple concentration of EM waves, but a tuned standing wave structure. In this respect Einstein's equation E=mc² is quite misleading, because the equation, although mathematically correct, gives no indication of the structure E must take in order to yield a resulting mass. In fact, every attempt to concentrate huge quantities of energy to generate mass has been a failure. A resonant standing wave is a prerequisite for generating any form of matter from pure energy, and we all know that the building blocks of a standing wave are in-going and out-going waves. Once this matter standing wave structure is broken into smaller structures, or even destroyed, the EM elements making it up are released and detected as travelling EM waves or other 'chunks' of smaller standing waves. What all these new ideas seem to suggest is that physical objects (matter), or even reality itself (things in motion), are not at all what everyone has long supposed them to be, and it is surely about time that current science makes up for this, even if this comes at the cost of rebuilding science itself from scratch.
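For reference, the numbers in E = mc² itself are straightforward to evaluate, even though, as argued above, the equation says nothing about structure:

```python
C = 2.998e8  # speed of light in vacuum, m/s

def rest_energy(m):
    """Energy equivalent of a mass m in kilograms: E = m*c**2."""
    return m * C ** 2

print(rest_energy(1.0))   # about 9e16 J locked up in a single kilogram
```

The enormous magnitude is the point: the equation fixes the exchange rate between mass and energy, but not the standing-wave arrangement the text argues is needed to realise it.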

In this picture, anything in existence in our 3D universe forms part of a single entity: a single, though complex, pattern of standing waves. The outward waves from a body evoke a response from the universe; that is, the production of inward waves from reflecting bodies elsewhere in the universe. However, the reflected waves begin before the moment of acceleration and before the arrival of the source waves. The combined waves themselves are the particles, and no point mass or charge is needed. Every charged particle is a structural part of the universe, and the whole universe contributes to each charged particle. Every particle sends quantum waves outward, and receives an inward response wave from the universe. The inward and outward waves, although both spherical, are not the same function; in fact they are different for each particle. Although the variety of molecules and materials populating the universe is enormous, the building bricks are just two: a spherical In-Wave and a spherical Out-Wave.

The first hint of the mechanism of cosmological energy transfer was Ernst Mach's observation in 1883. He noticed that the inertia of a body depended on the presence of the visible stars. He asserted that "every local inertial frame is determined by the composite matter of the universe" and, jokingly, that "when the subway jerks, it is the fixed stars that throw us down." How can information travel from here to the stars and back again in an instant? Mach's principle was criticised because it appeared to predict instantaneous action-at-a-distance across empty space. As Einstein observed: "Forces acting directly and instantaneously at a distance, as introduced to represent the effects of gravity, are not in character with most of the processes familiar to us from everyday life." Space is not empty because, although not easily observed, it is the quantum wave medium produced by waves from every particle in the universe, as predicted by Mach's principle long ago. The energy exchange of inertia, charge, and other forces is mediated by the presence of the space medium. There is no need to 'travel' across the universe. Special relativity is founded on the law of the constancy of the velocity of light; but the general theory of relativity cannot retain this law. On the contrary, according to this latter theory the velocity of light must always depend on the coordinates when a gravitational field is present.

The spherical IN and OUT waves of the source and receiver oscillate in two-way communication, until a minimum amplitude condition is obtained (i.e. resonant coupling). The decrease of energy (frequency) of the source will equal the increase of energy of the receiver; thus energy is conserved. EM waves are observed as a large number of such quantum changes.

Three hundred years ago, Christiaan Huygens, a Dutch mathematician, found that if a surface containing many separate wave sources was examined at a distance, the combined wavelets appeared as a single wave front. This wave front is termed a Huygens combination of the separate wavelets. This mechanism is the origin of the in-waves: our in-waves are formed from a Huygens combination of the out-waves of all the other matter in the universe. This occurs throughout the universe, so that every particle depends on all the others to create its in-wave. We have to think of each particle as inextricably joined with the other matter of the universe. Although particle centres are widely separated, all particles are one unified structure. Thus, we are part of a unified universe, and the universe is part of us.

Standard units to Spacetime conversion table
Leading the way to unification. Created: 12/2/05

Abstract

This paper shows that all measurable quantities we learn in physics can be represented as nothing more than a number of spatial dimensions differentiated by a number of temporal dimensions, or vice versa. To convert between such a space-time system of units and the conventional SI system, one simply multiplies the ST numerical values by dimensionless constants, converting between the natural space-time units and the 'historical' SI units. Once the ST system of units presented here is applied to any set of physics parameters, one is then able to derive all laws and equations without reference to the original theory which presented the relationship. In other words, all known principles and numerical constants which took hundreds of years to be discovered, like Ohm's Law, energy-mass equivalence, Newton's laws, etc., simply follow naturally from the spatial and temporal dimensions themselves, and can be derived without any reference to the standard theoretical background. Any relation between physical parameters one might think of can be derived. Included is a step-by-step worked example showing how to derive any free space constant and quantum constant.
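A minimal sketch of the bookkeeping this abstract proposes: represent each quantity as a pair of exponents (a, b) standing for space^a · time^b, so that multiplying quantities adds exponents and dividing subtracts them. The particular assignments below (speed as space/time, and so on) are ordinary kinematic ones chosen to illustrate the arithmetic, not the paper's full conversion table:

```python
# A dimension is a pair (space_exponent, time_exponent).
LENGTH = (1, 0)
TIME   = (0, 1)

def mul(a, b):
    """Dimensions multiply by adding exponents."""
    return (a[0] + b[0], a[1] + b[1])

def div(a, b):
    """Dimensions divide by subtracting exponents."""
    return (a[0] - b[0], a[1] - b[1])

SPEED        = div(LENGTH, TIME)   # (1, -1): space per time
ACCELERATION = div(SPEED, TIME)    # (1, -2): space per time squared

# Relations then follow from the exponent arithmetic alone:
assert mul(SPEED, TIME) == LENGTH  # distance = speed * time
print(ACCELERATION)                # (1, -2)
```

In this scheme an equation is dimensionally valid exactly when the exponent pairs on both sides match, which is the sense in which laws "follow from the dimensions themselves".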

Dimensions and dimensional analysis

One of the most powerful mathematical tools in science is dimensional analysis. Dimensional analysis is often applied in different scientific fields to simplify a problem by reducing the number of variables to the smallest number of "essential" parameters. Systems which share these parameters are called similar and do not have to be studied separately. More often than not, two apparently different systems are shown to obey the same laws, and one of them can be considered to be analogous to the other.

Unfortunately, the term 'dimension' has two completely different meanings, both of which are going to be used in this paper, so the reader should be aware of both in order to apply the correct meaning of the word according to the context in which it is being used. In mathematics the 'dimension' of a space is roughly defined as the minimum number of coordinates needed to specify every point within it. For example, a square has two dimensions, since two coordinates, say x and y, can be used to specify any point within it. A cube has three dimensions, since three coordinates, say x, y, and z, are enough to specify any point in space within it. In engineering and physics terminology, the term 'dimension' relates to the nature of a measurable quantity. In general, physical measurements must be expressed in units of measurement, and quantities obtained by such measurements are dimensionful. Quantities like ratios and multiplying factors, with no physical units assigned to them, are dimensionless. An example of a dimension is length, expressed in units of length, the metre, and an example of a dimensionless quantity is Pi. An engineering dimension can thus be a measure of a corresponding mathematical dimension; for example, the dimension of length is a measure of a collection of small linked lines of unit length, which have a single dimension, and the dimension of area is a measure of a collection or grid of squares, which have two dimensions. Similarly, the mathematical dimension of volume is three. The prefix 'hyper-' is usually used to refer to the four- (and higher) dimensional analogues of three-dimensional objects, e.g. hypercube, hypersphere...

The dimension of a physical quantity is the type of unit, or relation of units, needed to express it. For instance, the dimension of speed is distance/time and the dimension of a force is mass×distance/time². Conventionally, we know that in mechanics, every physical quantity can be expressed in terms of MLT dimensions, namely mass, length and time or alternatively in terms of MLF dimensions, namely mass, length and force. Depending on the problem, it may be advantageous to choose one or the other set of fundamental units. Every unit is a product of (possibly fractional) powers of the fundamental units, and the units form a group under multiplication.
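The MLT bookkeeping described above can be sketched in code with exponent mappings over M, L and T (fractional powers would work equally well); the example derives the dimension of force quoted in the text:

```python
# A dimension is a {base_symbol: exponent} mapping over M, L, T.
def combine(a, b, sign=1):
    """Multiply (sign=+1) or divide (sign=-1) two dimensions."""
    keys = set(a) | set(b)
    out = {k: a.get(k, 0) + sign * b.get(k, 0) for k in keys}
    return {k: v for k, v in out.items() if v != 0}  # drop zero exponents

MASS, LENGTH, TIME = {'M': 1}, {'L': 1}, {'T': 1}

SPEED = combine(LENGTH, TIME, -1)   # L / T
ACCEL = combine(SPEED, TIME, -1)    # L / T^2
FORCE = combine(MASS, ACCEL)        # M * L / T^2

print(FORCE)   # mass x distance / time^2, i.e. M L T^-2
```

Any algebra of physical quantities then reduces to integer arithmetic on the exponents, which is all that dimensional analysis ever uses.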

In the most primitive form, dimensional analysis is used to check the correctness of algebraic derivations: in every physically meaningful expression, only quantities of the same dimension can be added or subtracted. The two sides of any equation must have the same dimensions. Furthermore, the arguments to exponential, trigonometric and logarithmic functions must be dimensionless numbers, which is often achieved by multiplying a certain physical quantity by a suitable constant of the inverse dimension.
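This homogeneity rule is easy to mechanise. The following sketch (my own illustrative code, not taken from the text) represents each quantity as a tuple of (M, L, T) exponents; multiplication adds exponents, and an equation is dimensionally sound only when both sides carry the same tuple:

```python
# Illustrative sketch (not from the text): dimensional homogeneity with
# (M, L, T) exponent tuples. Multiplying quantities adds exponents;
# an equation is dimensionally sound only if both sides match.

def mul(a, b):
    # product of two quantities: exponents add
    return tuple(x + y for x, y in zip(a, b))

def div(a, b):
    # quotient of two quantities: exponents subtract
    return tuple(x - y for x, y in zip(a, b))

MASS   = (1, 0, 0)   # M
LENGTH = (0, 1, 0)   # L
TIME   = (0, 0, 1)   # T

SPEED        = div(LENGTH, TIME)        # L T-1
ACCELERATION = div(SPEED, TIME)         # L T-2
FORCE        = mul(MASS, ACCELERATION)  # M L T-2
ENERGY       = mul(FORCE, LENGTH)       # M L2 T-2

# E = 1/2 m v2: the 1/2 is dimensionless, so only m v2 matters.
assert ENERGY == mul(MASS, mul(SPEED, SPEED))
```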

The Buckingham π theorem is a key theorem in dimensional analysis. The theorem states that the functional dependence between a certain number n of variables can be reduced by the number k of independent dimensions occurring in those variables, to give a set of p = n - k independent, dimensionless numbers. A dimensionless number is a quantity which describes a certain physical system and which is a pure number without any physical units. Such a number is typically defined as a product or ratio of quantities which DO have units, in such a way that all units cancel. A system of fundamental units (or sometimes fundamental dimensions) is such that every other unit can be generated from them. The kilogram, metre, second, ampere, kelvin, mole and candela are supposed to be the seven fundamental units, termed SI base units; other units such as the newton, joule, and volt can all be derived from the SI base units and are therefore termed SI derived units. The choice of dimensionless units is not unique: Buckingham's theorem only provides a way of generating sets of dimensionless parameters, and will not choose the most 'physically meaningful'.
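As a concrete instance of such a dimensionless group, the sketch below (my own illustration; quantities are (M, L, T) exponent tuples) confirms that the Reynolds number ρvL/μ is a pure number:

```python
# Illustrative check that the Reynolds number, a classic Buckingham
# pi-group, is dimensionless. Quantities are (M, L, T) exponent tuples;
# in a product the exponents add, in a reciprocal they negate.

def dims_of_product(*factors):
    return tuple(sum(col) for col in zip(*factors))

def inv(q):
    return tuple(-x for x in q)

DENSITY   = (1, -3, 0)   # kg m-3
VELOCITY  = (0, 1, -1)   # m s-1
LENGTH    = (0, 1, 0)    # m
VISCOSITY = (1, -1, -1)  # Pa s = kg m-1 s-1

# Re = rho * v * L / mu
re_dims = dims_of_product(DENSITY, VELOCITY, LENGTH, inv(VISCOSITY))
print(re_dims)  # -> (0, 0, 0): all units cancel
```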

Why not choose SI?

We know that measurements are the backbone of science. A lot of work has gone into the present self-coherent SI system of physical parameters, so why not choose SI as the foundation of a unifying theory? Because if present science is not leading to unification, something in its foundations must be really wrong, and where else to start searching if not in its measuring units? The present SI system of units has been laid out over the past couple of centuries, while the very knowledge that generated it in the first place has changed, making the SI system more or less a database of historical units.

The major fault in the SI system can easily be seen in the relation diagram shown here, officially issued by the BIPM (Bureau International des Poids et Mesures); we have only added the three green arrows for the kelvin unit. One would expect to see the seven base units totally isolated, with arrows pointing radially outwards towards derived units. Instead, what we get is a totally different picture: the seven SI base units are not even independent, but totally interdependent like a web, and so do not even strictly qualify as fundamental dimensions. If, for instance, one had to change the definition of the kilogram, the 'fundamental' units candela, mole, ampere and kelvin would change as well. In the original diagram issued by the BIPM, the kelvin was the only isolated unit, but as I will describe shortly, it should be well interconnected, as shown by the additional green arrows. So one cannot say there are seven fundamental SI units if these units are not independent of each other.

The other big fault is the obvious redundancy of units. Although not very well known, at least two of the seven base units of the SI system are officially known to be redundant, namely the mole and the candela. These two units have been dragging along, ending up in the SI system for no reason other than a historic one.
The mole is merely a certain number of atoms or molecules, in the same sense that a dozen is a number; there is no need to designate this number as a unit.
The candela is an old photometric unit which can easily be derived from radiometric units (radiated power in Watts) by multiplying it by a function to describe the optical response of the human eye. The candela unit, together with its derived units as lux and foot-candelas serve no purpose that is not served equally well by watt per steradian and its derivatives.
Temperature is yet another base unit that can be made redundant by adopting a new definition for its unit. Temperature could be measured in energy units because, according to the equipartition theorem, temperature is proportional to the energy per degree of freedom. It is also known that for a monatomic ideal gas the temperature is related to the translational motion, or average speed, of the atoms. The kinetic theory of gases uses statistical mechanics to relate this motion to the average kinetic energy of atoms and molecules in the system. In this case 11605 kelvin corresponds to an average kinetic energy of one electronvolt, equivalent to 1.602E-19 Joules. Hence the kelvin could also be defined as a derived unit, equivalent to 1.3806E-23 Joules per degree of freedom, having the same dimensions as energy. Every temperature T has associated with it a characteristic amount of energy kT which is present in surroundings with that temperature at the quantum and molecular levels. At any given temperature the characteristic energy E is given by kT, where k (= 1.3806E-23 m2 Kg/sec2/K) is the Boltzmann constant, which is nothing more than a conversion factor between characteristic energy and temperature. Temperature can thus be seen as an alternative scale for measuring that characteristic energy. The Joule is equivalent to Kg m2/sec2, so for the kelvin unit we had to add the three green arrows pointing from the kilogram, metre and second, which are the SI units defining energy. Furthermore, the definitions of the supplementary units, radian and steradian, are gratuitous: these properly belong to the province of mathematics, and there is no need to include them in a system of physical units. So what are we left with? How many dimensions can the SI system be reduced to? Looking again at the SI relations diagram, let us see which units do NOT depend on others, that is, which are those having only outgoing arrows and no incoming arrows.
We see that in the SI system, only the second and the kilogram are independent. This means that the SI system can be reduced to no more than two dimensions, without losing any of the physical significance of the involved units. But we know that there are many other combinations that lead to the same number of fundamental dimensions, and that the kilogram and the second might not be the most physically meaningful independent dimensions. Strictly speaking, only Space and Time are fundamental dimensions... so what are the rest? Just patches in physics covering our ignorance, our inability to accept that point particles, with the fictitious Kg dimension, do not exist.
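Returning for a moment to the kelvin-as-energy argument above: the quoted correspondence between temperature and characteristic energy is easy to verify numerically (a quick illustrative check):

```python
# Quick check of the figures quoted above: the temperature at which the
# characteristic energy kT equals one electronvolt.

k  = 1.3806e-23   # Boltzmann constant in J/K, a mere conversion factor
eV = 1.602e-19    # one electronvolt in joules

T = eV / k        # temperature for which kT = 1 eV
print(round(T))   # -> 11604, matching the ~11605 K quoted above
```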

Present maintenance and transitions in the metric SI system of units

Yes, hard to believe but true! Even though such transitions are hard to implement and the inertia of the SI system of units is huge, a few transitions towards better definitions are successfully finding their way into the present SI metric system, so all is not lost. One such idea is the transition towards definitions based solely on the unit of time, taking the atomic clock second as reference and adopting exact values for certain constants. A notable step was taken in 1983, when the metre was defined by specifying that the standard speed of light be exactly 299792458 metres per second. In 1990 the BIPM established its voltage standard by specifying that Josephson's constant be exactly 483597.9 billion cycles per second per volt. Although this standard is already in use, the official definition of the volt has not yet been changed to be consistent with the method of measurement, leaving voltage and related quantities in a state of patchwork. In 1999 the CGPM called for a redefinition of the kilogram along the lines of the 1990 standards, and the following year two leading members, Mohr and Taylor, supplied the following proposed redefinition: the kilogram is the mass of a body at rest whose equivalent energy equals the energy of a collection of photons whose frequencies sum to 135 639 274 × 10^42 Hz. Mohr and Taylor also suggested that Planck's constant be made exactly equal to 299792458^2/135 639 274 × 10^-42 joule seconds, a value which follows from their suggested definition of the kilogram.

Reference: Redefinition of the kilogram: a decision whose time has come by the Institute of Physics Publishing.
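The arithmetic behind the proposed redefinition is easy to verify (an illustrative check, using the constants as quoted above):

```python
# Illustrative check of the Mohr-Taylor proposal quoted above: if 1 kg
# of rest energy equals the energy of photons whose frequencies sum to
# 135 639 274 x 10^42 Hz, then E = m c^2 = h * f_sum fixes h.

c     = 299_792_458        # speed of light in m/s, exact
f_sum = 135_639_274e42     # proposed frequency sum in Hz

h = c**2 / f_sum           # (1 kg) c^2 / f_sum, in joule seconds
print(h)                   # -> about 6.62607e-34 J s, Planck's constant
```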

Introducing the ST system of units - The Rosetta stone of a new physics

Here we will go a step further than the conventional SI dimensions and their patchwork, and reduce all scientific units to the real fundamental dimensions, namely Space (metres) and Time (seconds). As shown in this diagram, all SI base units have been re-mapped onto the two fundamental units. We can therefore re-map the rest of the SI-derived units onto our ST system as well. At first it seemed an impossible mission, but as I went through all the equations currently known, I found that many different branches of science are equivalent to each other. In this paper, space takes a slightly different meaning than the conventional three-dimensional property of the universe in which matter can be located, and in fact is no longer restricted to three dimensions. One starts off with distance as the one-dimensional unit of space, S; area becomes the two-dimensional unit of space, S2; volume becomes the three-dimensional unit of space, S3; speed is distance/time, which becomes S/T. To move on to define energy-related units, I make use of the knowledge presented in the standing wave EM structure of matter, which enabled me to continue the conversion work on parameters in all the other fields. Surprisingly, once you have the ST units for mass, one is able to put up a full self-coherent table of ST dimension conversions for all known physical quantities, while eliminating all the nonsense webbing of the conventional SI system.

Such a table sets up a much stronger foundation for a new science, and helps you visualise how scientific parameters relate to each other through space and time. Quoting John Wheeler: "There is nothing in the world except empty curved space" and "Matter, charge, electromagnetism and other fields are only manifestations of the curvature of space." Once you grasp the whole concept, you will easily understand why RC is a time constant, why mass is a volume of energy, why f = 1/(2π√(LC)), and how all 'mechanical' and Newton's laws are related to electrical laws. Use this table to dimensionally check all your physics equations, and compile new ones yourself! Some will look really weird, but some will definitely make a lot of sense. Note that the SI system is no less weird: for example, resistance in SI is measured in m2Kg/sec3/Amp2 (we call this Ohms), and in most cases the units look simpler when converted to ST; in this case resistance is measured in sec2/m3, though you cannot call this Ohms, since a dimensionless conversion factor is required. You will be able for the first time to clearly see that the ratio of energy to mass is velocity squared (E/m = c2). Using the following table, it might also be an interesting exercise to relate different parameters through integration or differentiation of their ST parameters; you may differentiate or integrate with respect to either S or T. This is basically the Rosetta stone translating between classical theory and the new unified physics I am hereby introducing.

Parameter Symbol Units SI units ST Dimensions
Distance S metres m S
Area A metres squared m2 S2
Volume V metres cubed m3 S3
Time t seconds s T
Speed/Velocity u metres/sec m/s S T-1
Acceleration a metres/sec2 m/s2 S T-2
Force/Drag F Newtons Kg m/s2 T S-2
Surface Tension γ Newton per metre Kg/s2 T S-3
Spring constant k Newton per metre Kg/s2 T S-3
Energy/Work E Joules Kg m2/s2 T S-1
Power P Watts or J/sec m2 Kg/s3 S-1
Density ρ kg/m3 kg/m3 T3 S-6
Mass m Kilogram Kg T3 S-3
Momentum p Kg metres/sec Kg m/s T2 S-2
Impulse J Newton seconds Kg m/s T2 S-2
Moment M Newton metres m2 Kg/sec2 T S-1
Torque τ Foot Pounds or Nm m2 Kg/sec2 T S-1
Angular Momentum L Kg m2/s Kg m2/s T2 S-1
Moment of Inertia I Kilogram m2 Kg m2 T3 S-1
Angular velocity/frequency ω Radians/sec rad/sec T-1
Pressure/Stress P Pascal or N/m2 Kg/m/sec2 T S-4
Specific heat Capacity c J/kg/K m2/sec2/K S3 T-3
Specific Entropy s J/kg/K m2/sec2/K S3 T-3
Resistance R Ohms m2 Kg/sec3/Amp2 T2 S-3
Impedance Z Ohms m2 Kg/sec3/Amp2 T2 S-3
Conductance G Siemens or Amp/Volt sec3 Amp2/Kg/m2 S3 T-2
Capacitance C Farads sec4 Amp2/Kg/m2 S3 T-1
Inductance L Henry m2 Kg/sec2/Amp2 T3 S-3
Current I Amps Amp S T-1
Electric charge q Coulomb Amp sec S
Electric flux φ Vm Volt metre T S-1
Magnetic charge qm Am Amp metre S2 T-1
Magnetic flux φ Weber or Volt sec m2 Kg/sec2/Amp T2 S-2
Magnetic flux density B Tesla, gauss or Wb/m2 Kg/sec2/Amp T2 S-4
Magnetic reluctance R Amp2 sec2/Kg/m2 S3 T-3
Electric flux density Jm2 Kg m4/sec2 S T
Electric field strength E N/C or V/m m Kg/sec3/Amp T S-3
Magnetic field strength H Oersted or Amp-turn/m Amp/m T-1
Poynting vector S Joule/s/m2 Kg/sec3 S-3
Frequency f Hertz sec-1 T-1
Wavelength λ metres m S
Wavenumber v~ reciprocal centimetre m-1 S-1
Voltage EMF V Volts m2 Kg/sec3/Amp T S-2
Magnetic/Vector potential MMF Kg/sec/Amp T2 S-3
Permittivity ε Farad per metre sec4 Amp2/Kg/m3 S2 T-1
Permeability μ Henry per metre Kg m/sec2/Amp2 T3 S-4
Resistivity ρ Ohm metres m3 Kg/sec3/Amp2 T2 S-2
Temperature T Kelvin K T S-1
Enthalpy H Joules Kg m2/s2 T S-1
Conductivity σ Siemens per metre sec3 Amp2/Kg/m3 S2 T-2
Thermal Conductivity W/m/K Kg m/sec3/K S-1 T-1
Thermal Resistivity K m/W sec3 K/Kg/m S T
Thermal Conductance W/K Kg m2/sec3/K T-1
Thermal Resistance K/W sec3 K/Kg/m2 T
Energy density J/m3 Kg/m/sec2 T S-4
Ion mobility μ metre2/Volt second Amp sec2/Kg S4 T-2
Radioactive dose Sv Sievert or J/Kg m2/s2 S2 T-2
Dynamic Viscosity η Pa sec or Poise Kg/m/s T2 S-4
Kinematic Viscosity ν Stokes cm2/sec S2 T-1
Fluidity 1/Pascal second m sec/Kg S4 T-2
Effective radiated power ERP Watts/m2 Kg/m/sec3 S-3
Luminance Nit Candela/m2 S-3
Radiant Flux Watts Kg m2/sec3 S-1
Luminous Intensity Candela Candela S-1
Gravitational Constant G Nm2/Kg2 m3/Kg/s2 S6 T-5
Planck Constant h Joule seconds Kg m2/sec T2 S-1
Coefficient of viscosity η Kg/m/s T2 S-4
Young's Modulus of elasticity E N/m2 Kg/m/s2 T S-4
Electron Volt eV 1eV Kg m2/sec2 T S-1
Hubble constant H0 Km/sec/Parsec T-1
Stefan's Constant σ W/m2/K4 Kg/s3/m/K4 S T-4
Strain ε - - S0 T0
Refractive index n - - S0 T0
Angular position rad Radians m/m S0 T0
Boltzmann constant k Joule/Kelvin Kg m2/s2/K S0 T0
Molar gas constant R J/mol/Kelvin Kg m2/s2/K S0 T0
Mole n Mol Kg/Kg S0 T0
Fine Structure constant α - - S0 T0
Entropy S Joule/Kelvin Kg m2/s2/K S0 T0
Reynolds Number Re - - S0 T0
Newton Power Number Np - - S0 T0


If anyone wants to add any missing parameter, or knows of any known equation that invalidates any of the above conversions, please let me know. Here is a simple example showing how to validate an equation in ST dimensions:

Equation to test : Casimir force F= hcA/d4

Convert each parameter to its ST dimensions from the table:

F= force= T S-2
c= speed of light= S T-1
h= Planck's constant = T2 S-1
A= Area = S2
d= 1d space = S

So the equation becomes:

T S-2 = T2 S-1 * S T-1 * S2 * S-4 = T(2-1) S(-1+1+2-4)
T S-2 = T S-2 ... dimensionally correct.
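The validation procedure just shown lends itself to automation. Below is a minimal sketch (my own illustrative code; the quantity names are mine) using (S, T) exponent pairs taken from the conversion table:

```python
# A sketch of the validation procedure above: each quantity carries a
# pair of (S, T) exponents from the conversion table, a product adds
# exponents, and both sides of an equation must end up equal.

def st_mul(*quantities):
    return tuple(sum(e) for e in zip(*quantities))

def st_pow(q, n):
    return tuple(e * n for e in q)

# A few entries from the table, as (S, T) exponent pairs:
FORCE  = (-2, 1)   # T S-2
SPEED  = (1, -1)   # S T-1
PLANCK = (-1, 2)   # T2 S-1
AREA   = (2, 0)    # S2
LENGTH = (1, 0)    # S
ENERGY = (-1, 1)   # T S-1
MASS   = (-3, 3)   # T3 S-3

# Casimir-type expression F = h c A / d^4:
assert st_mul(PLANCK, SPEED, AREA, st_pow(LENGTH, -4)) == FORCE

# E = m c^2 checks out the same way:
assert st_mul(MASS, st_pow(SPEED, 2)) == ENERGY
```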
Where does our scientific knowledge stand?

The above conversion table makes a few things quite obvious. Since S has been defined as space in one dimension (a line), S2 defines a 2D plane, S3 defines a 3D volume, and so forth, we might wonder why terms up to the 6th power should exist, and what the significance of the negative-powered dimensions is.
As discussed in the section Higher dimensional space, all clues point towards an ultimate fractal spacetime dimension slightly higher than 7, so one should really expect physical parameters with space dimensions up to 7. Now, to the difficult part... time dimensions. We normally talk about 3D space + 1D time, and I have also introduced nD space + 1D time in the 'Existence of higher dimensions' section. Our mind is limited to perceiving everything along one 'time vector', that is, one continuous timeline arrow, directed from past to future. Most readers who went through the mentioned sections will probably already have had a hard time trying to perceive higher space dimensions as seen from such a single time vector. However, physical parameters which affect our universe do not necessarily exist in this single timeline, and this can easily be seen from those parameters having powers of their T dimension different from unity. The table below shows more clearly all known physical parameters in terms of their space and time dimensions. As expected, all known parameters fit into a 7D spacetime. As you know, I have often referred to a self-observing universe, and the negative-powered dimensions are a consequence of this observation. If the observer 'lives' in a 1D timeline, then he can observe the surrounding space with respect to T: we are able to observe space S, but also to observe S with respect to T = dS/dT = velocity = S T-1. So, although we write T-1, we are here differentiating S by T+1; the + and - signs only indicate in which dimension (S or T) the observer resides. Note how the inverse relation between some of the parameters is made obvious through this matrix, for example resistance vs conductance, and dynamic viscosity vs fluidity.
We now use this knowledge to conclude that all physical parameters are combinations of observations of space and time made from different dimensions of space and time, and that they all fit into a table holding 7D spacetime. The table below shows the result after ticking a box for each known physical parameter from the table above. This table actually shows us where our scientific knowledge stands: counting the checked boxes, one gets only about 15% of the whole table, which took humanity a few million years to find out. Once we are able to fill up the complete table, we will know how to inter-relate all dimensions of our unified universe. Until that day, no student (or lecturer) can ever think that the existing explanations of science are final, and that after reading all his textbooks or finishing his course of study, he should go away satisfied with his wisdom!

      T-7 T-6 T-5 T-4 T-3 T-2 T-1 T0  T1  T2  T3  T4  T5  T6  T7
S 7    -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
S 6    -   -   X   -   -   -   -   -   -   -   -   -   -   -   -
S 5    -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
S 4    -   -   -   -   -   X   -   -   -   -   -   -   -   -   -
S 3    -   -   -   -   X   X   X   X   -   -   -   -   -   -   -
S 2    -   -   -   -   -   X   X   X   -   -   -   -   -   -   -
S 1    -   -   -   X   -   X   X   X   X   -   -   -   -   -   -
S 0    -   -   -   -   -   -   X   X   X   -   -   -   -   -   -
S -1   -   -   -   -   -   -   X   X   X   X   X   -   -   -   -
S -2   -   -   -   -   -   -   -   X   X   X   -   -   -   -   -
S -3   -   -   -   -   -   -   -   X   X   X   X   -   -   -   -
S -4   -   -   -   -   -   -   -   -   X   X   X   -   -   -   -
S -5   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
S -6   -   -   -   -   -   -   -   -   -   -   X   -   -   -   -
S -7   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -

Science tail chasing
... the mechanism that guarantees getting nowhere

The space-time conversion table shown on the previous page is a great leap towards unification, and it exposes the redundancy of the conventional scientific laws simply by examining their foundations - the measuring system. If the measuring system of a science is full of redundant units, then it surely means that many of the laws based on those units are redundant or circular.

The notion of the redundancy of scientific laws was well expressed by the late Professor J. L. Synge, and made public in a series of lectures delivered at the Dublin Institute for Advanced Studies in 1949. Quoting Synge:

..... Thought is difficult and painful. The difficulties and pain are due to confusion. From time to time, with enormous intellectual effort, someone creates a little order - a small spot of light in the dark sea of confusion. At first we are all dazzled by the light because we are used to living in the darkness. But when we regain our senses and examine the light we find it comes from a farthing candle - the candle of common sense. To change the metaphor, the sages chase their own tails through the ages. A little child says 'Gentlemen, you are chasing your own tails.' The sages gradually lose their angular momentum, and, glancing over their shoulders, see what they are pursuing. But most of them cannot believe what they see, and the tail chasing does not die out until a generation has passed.....

Forty years ago Schroedinger wrote (in his article recently reprinted in the Special Issue 1991 of Scientific American, "Science in the 20th century", p.16):

"Fifty years ago science seemed on the road to a clearcut answer to the ancient question which is the title of this article [Our Conception of Matter]. It looked as if matter would be reduced at last to its ultimate building blocks - to certain submicroscopic but nevertheless tangible and measurable particles. But it proved to be less simple than that. Today a physicist no longer can distinguish significantly between matter and something else. We no longer contrast matter with forces or fields of force as different entities; we know now that these concepts must be merged... . We have to admit that our conception of material reality today is more wavering and uncertain than it has been for a long time. ... Physics stands at a grave crisis of ideas. In the face of this crisis, many maintain that no objective picture of reality is possible. However, the optimists among us (of whom I consider myself one) look upon this view as a philosophical extravagance born of despair. We hope that the present fluctuations of thinking are only indications of an upheaval of old beliefs which in the end will lead to something better than the mess of formulas that today surrounds our subject."

It is astonishing, but also frustrating, to see how topical these remarks still are today. Weinberg, Feynman, Wolff and certainly other well-known science explorers have more than once drawn our attention to the same inadequate foundations of natural laws.

In my ST table, together with the fractal model of the atom described in the particle section, I have tried to show the head and the tail of science. As you should have followed, the units candela, Kg, mole, Ampere and Kelvin are the teeth holding tight to the tail of science. Our present science knowledge, books and lectures are the force driving the circular motion of the tail chasing. The conversion table stops this vicious loop quite abruptly and attempts to put back some order.

Of course, most of you will not like what you see, and will argue that the tail you are chasing is not yours. But let's stop with the metaphors and try to explain things with some elementary physics.

What looks so unconventional in the ST unification table is the fact that matter is a 3D version of energy, and that energy, or 1D mass, is the inverse of velocity. Once these two strange links are cleared up, it becomes immediately clear that the ST table should be the real fundamental measuring system of science.

Let's start from what everybody knows: the 1D space dimension S is a unit of length, and the 1D time dimension T is a unit of time. It follows that the unit of velocity should be S/T and that of acceleration S T-2. Also, the second dimension of space is not 2S but S2. Now, anybody who has tried out known equations and worked out their dimensions according to the ST table will agree that the rest of the table is, to say the least, SELF-COHERENT; but the link between length, time, velocity or acceleration and energy and all the rest of the parameters may not be obvious. For this analysis I have used the quite elementary yet powerful equations of motion given by James Jeans in his introduction to the kinetic theory of gases, and will try to derive the mass unit in its one-dimensional form, in terms of length and time.

We will here consider the impact of two elastic bodies of masses m1, m2 in a simple one-dimensional space. The velocities before impact are u1, u2 respectively; the velocities after impact are v1, v2. Since we consider mass in one dimension (a point moving along a line), we assume movement takes place only in the x-direction, to the left and right. You can choose either direction of motion to be positive velocity; the other is then negative.

Energy is Inverse velocity (T/S)

The following is taken from the mathematical work of James Jeans, 'An Introduction to the Kinetic Theory of Gases', Cambridge University Press, 1960. We here consider a totally isolated system, in which we know that total system momentum is conserved: the momentum lost by one object is equal to the momentum gained by another. For collisions occurring in an isolated system, there are no exceptions to this law.


momentum before impact = momentum after impact

m1u1 + m2u2 = m1v1 + m2v2 .....(1)

Looked at hierarchically, velocity may be viewed as existing at two levels, a high order velocity V averaged over equal intervals of time before and after impact and defined by the equation:



V = 1/2 (u1 + v1) = 1/2 (u2 + v2) .....(2)
and low order velocities obtained by subtracting the high order velocity, V, from the individual velocities, u1, u2, v1, v2:



μ1 = u1 - V .....(3)

μ2 = u2 - V .....(4)

τ1 = v1 - V .....(5)

τ2 = v2 - V .....(6)

From equation (2):



μ1 = -τ1 .....(7)

τ2 = -μ2 .....(8)

The individual velocities can now be seen as the sum of the low order, 'within batch' velocities μ1, μ2, τ1, τ2 and the higher order, 'between batch' velocity V. Now from equations (3) to (8):



u1/μ1 + u2/(-μ2) = v1/(-τ1) + v2/τ2 .....(9)
Substituting from equations (7) & (8) and re-arranging:



(1/μ1) u1 + (1/τ2) u2 = (1/μ1) v1 + (1/τ2) v2 .....(10)



Equation (10) is isomorphic to the equation of conservation of momentum, equation (1):


m1u1 + m2u2 = m1v1 + m2v2 .....(1)


The 1D masses m1 and m2 have been replaced by the reciprocal internal 1D velocities (1/μ1) and (1/τ2). Numerically, these reciprocal terms will differ from the mass values in Kg units, because the Kg SI unit is an arbitrary unit defined in 3D, whereas the reciprocal terms are in seconds per metre. This implies that the 1D form of mass has dimensions (S/T)-1, or T/S. The concept of 3D mass can thus be replaced by the concept of reciprocal 3D internal velocity, both at the macro and the micro scale, leading to a 3D mass dimension of T3/S3. The concept of stepping up dimensions can easily be understood when one considers any spacetime unit to be a ratio of two spatial dimensions. We easily understand that 2D space is S2 and 3D space is S3; this rule applies to the spatial time dimension as well as to combination units such as velocity and mass. In general, the nth dimensional unit of a spacetime parameter Sx Ty will be equal to Snx Tny. Thus the units for the different dimensions of mass will be of the form Tn S-n, all being the same entity in different dimensions. The Newtonian Kg is just one of these entities, for the condition n = 3, giving the 3D version of mass, with spacetime dimension T3/S3.
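The collision argument can also be checked numerically. The sketch below (my own illustration, using the textbook final-velocity formulas for a 1D elastic collision) confirms that the reciprocal internal velocities stand in the same ratio as the masses, and that they conserve a momentum-like quantity:

```python
# Numeric check of the collision argument: for a 1D elastic collision
# the reciprocals of the internal velocities mu1 and tau2 are in the
# same ratio as the masses m1 and m2, and satisfy a momentum-like law.

m1, m2 = 2.0, 3.0      # arbitrary masses
u1, u2 = 5.0, -1.0     # arbitrary initial velocities

# Textbook elastic-collision results for the final velocities:
v1 = ((m1 - m2) * u1 + 2 * m2 * u2) / (m1 + m2)
v2 = ((m2 - m1) * u2 + 2 * m1 * u1) / (m1 + m2)

V    = 0.5 * (u1 + v1)   # high-order velocity, eq. (2)
mu1  = u1 - V            # eq. (3)
tau2 = v2 - V            # eq. (6)

assert abs(0.5 * (u2 + v2) - V) < 1e-12                 # eq. (2), 2nd form
assert abs((1 / mu1) / (1 / tau2) - m1 / m2) < 1e-12    # mass ratio
# the reciprocal velocities conserve 'momentum' just as m1, m2 do:
assert abs((u1 / mu1 + u2 / tau2) - (v1 / mu1 + v2 / tau2)) < 1e-12
```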

From the kinetic energy equation E = 1/2mv2, we get E = T3S-3 * S2T-2 = T/S, re-confirming Einstein's statement: 'It followed from the special theory of relativity that mass and energy are both but different manifestations of the same thing -- a somewhat unfamiliar conception for the average mind.' One could easily replace the Kg by Joules3 by simply introducing a dimensionless conversion factor between the two units. It is quite impressive that we arrived at the same conclusion without reference to Einstein's equations or the special theory of relativity: all we did was equate velocities in the elementary equation of conservation of momentum.
From the above we have proved that energy is a one dimensional form of mass, and that it has dimensions T/S, which are those of inverse velocity!


Replacing SI with ST units

Other ST units can be easily derived using our present knowledge as follows:

Planck constant h= E/f = T/S * T = T2/S

From E=mc2, m = E/c2 = T/S * (T/S)2 = T3S-3

For momentum = mv = T3/S3*S/T = T2S-2

For angular momentum L = mvr = T3/S3*S/T*S = T2/S ... same as Planck constant

For Moment of Inertia I=L/w = T2/S * T = T3/S

From F=ma, we get Force= T3S-3*S T-2= TS-2

Electromotive force (Voltage) = TS-2

For power, P=Fv, we get P=TS-2*S/T = S-1

For current, I= P/V = S-1/(TS-2) = S/T

For resistance, R = V/I = TS-2/ (S/T) = T2S-3

For mass flow rate mdot = dm/dt = (T3S-3)/ T = T2S-3

For Pressure = F/A = TS-2*S-2 = TS-4

Frequency = v/λ = S/T * S-1 = T-1

Temperature = E/k = T/S * 1 = T/S

For charge q=It = S/T * T = S

For Capacitance C = Q/V = S/(TS-2) = S3T-1

From V=L(dI/dt), Inductance L= TS-2 * T * T/S = T3/S3...same as mass!
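The chain of derivations above can be replayed mechanically with the same exponent bookkeeping, as in this short illustrative sketch (my own code; quantities are (S, T) exponent pairs):

```python
# Replaying the ST derivations above with (S, T) exponent pairs.

def mul(a, b): return (a[0] + b[0], a[1] + b[1])
def div(a, b): return (a[0] - b[0], a[1] - b[1])

S, T = (1, 0), (0, 1)

ENERGY  = div(T, S)                       # T/S, from the momentum argument
SPEED   = div(S, T)                       # S/T
MASS    = div(ENERGY, mul(SPEED, SPEED))  # E/c2 = T3 S-3
FORCE   = mul(MASS, div(SPEED, T))        # m a = T S-2
POWER   = mul(FORCE, SPEED)               # F v = S-1
CURRENT = div(POWER, FORCE)               # P/V, with voltage ~ force
RESIST  = div(FORCE, CURRENT)             # V/I = T2 S-3
INDUCT  = div(mul(FORCE, T), CURRENT)     # L = V dt / dI

assert MASS == (-3, 3)       # T3 S-3
assert RESIST == (-3, 2)     # T2 S-3
assert INDUCT == MASS        # inductance has the dimensions of mass
```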



Interesting things to note:

Newton's law: F = ma = m(dv/dt)

Comparing with V = L(dI/dt): voltage has the same dimensions as force, inductance the same dimensions as mass, and current the same dimensions as velocity. It is clear that the equations are actually the same, and that V = L(dI/dt) is actually Newton's law of motion.


Power = Force * Velocity

Comparing with Power = V*I, where voltage has the same dimensions as force, and current the dimensions of velocity. Again, it is the same equation.


Kinetic energy = 1/2mv2

Energy stored in inductor = 1/2LI2, where L has dimensions of mass and I of velocity.


Work (Energy) = Force * distance
Compare with Energy = Vq, where voltage has dimensions of force, and charge dimensions of length.


Now compare the time constant of a simple pendulum, given by (L/g)^(1/2).
If we replace the pendulum length L by charge (dimension S), and the gravitational acceleration by current acceleration (a = dv/dt → dI/dt), we have:
Time constant = (qT/I)^(1/2) ... but q = CV and R = V/I, so:
T = (RCT)^(1/2)
T^(1/2) = (RC)^(1/2)
T = RC ... the time constant of an RC circuit, derived from the mechanical pendulum


From Force = Rate of change of momentum = m (dv/dt)
Compare to EMF = L (dI/dt)... means that product LI is in fact the momentum of the electrical system.


Energy = Force * distance
Energy = ma * d
E = mvd/t .... but mvd is momentum*distance, which has the same dimensions as Energy*time, the well-known quantum of action, Planck's constant h, so:
E = h/t .... 1/t= frequency, thus
E = hf


From rocket equation : Thrust = Velocity * Mass flow rate

Replacing Thrust (force) by voltage, velocity by current, and mass flow rate by resistance, we get Ohms Law:
V = IR .... so Ohm's law is nothing more than the rocket thrust equation, and it shows that a resistor controls MASS flow rate, NOT charge flow rate. This clearly exposes one of the major misconceptions of present electrical theory, in which it is assumed that a resistor acts on charges, which is clearly not the case: resistance in fact acts on the MASS of the flowing electrons and not on their charge.


A note on h and h-bar

Arguments showing why h-bar (Dirac's constant ) should NOT be used to derive Planck units

Unfortunately, a lot of scientific literature states Planck units in terms of hbar (= h/2π), known as Dirac's constant or the reduced Planck constant. THIS IS INCORRECT. The 2π factor in fact leads to totally different (and wrong) numeric values for the Planck units than the original values set out by Planck himself. The 2π factor is a gratuitous addition, coming from the failure to address the Hydrogen atom's stable orbits as defined by the orbital path length being an exact multiple of the orbital matter (standing wave) wavelength.

The statement that the orbital electron's angular momentum is quantised as in:

m·v·R = n·(h/2π) = n·hbar for integer values of n, is just a mis-statement of

2π·R = n·h/(mv) .... which, when substituting h = E/f, v = f·λ, and m = E/(f·λ)2, gives:

2π·R = n·λ ..... which means that the 2π factor has nothing to do with h as such, and that the orbital path is just an integer number of wavelengths, as described by Louis de Broglie (see diagram above). Dirac's hbar was thus defined through a lack of understanding of the wave structure of matter, and its use should be discouraged.
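This relation is easy to check numerically for the hydrogen ground state, using standard Bohr-model values (an illustrative check, not taken from the text above):

```python
# Standard Bohr-model figures: for the hydrogen ground state the orbit
# circumference 2*pi*R equals one de Broglie wavelength h/(m*v),
# i.e. the n = 1 case of 2*pi*R = n*lambda.

import math

h  = 6.62607e-34   # Planck constant, J s
me = 9.10938e-31   # electron mass, kg
a0 = 5.29177e-11   # Bohr radius, m
v1 = 2.18769e6     # ground-state orbital speed, m/s

lam = h / (me * v1)              # de Broglie wavelength of the electron
circumference = 2 * math.pi * a0

assert abs(circumference / lam - 1) < 1e-3   # n = 1, to within rounding
```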

Some physicists still prefer to use ħ, not for any scientific reason, but mostly for the sake of simplicity in their calculations. Their main argument is that preferring h to ħ amounts to preferring a circle of unit circumference to a circle of unit radius: setting h = 1 instead of ħ = 1 means working with a circle whose circumference, rather than radius, is one. Though this may look simple and true when one views the problem in Euclidean (plane) geometry, one has to keep in mind that Euclidean geometry is only an approximation to the properties of physical space, and Einstein showed that space becomes elliptically curved (non-Euclidean) in regions where matter is present. The shortest path in a non-Euclidean space is a curved path, and though it may not seem logical, the straight line joining two points can be a longer way to go than the curved path between the same two points. The matter wave (de Broglie wave) shown above is not being forced to loop round the circle; it is simply following the easiest and shortest path in its non-Euclidean space. Planck's work was not about electromagnetic waves travelling in free space, for which Euclidean geometry is a good approximation, but about the interaction of such waves with matter. Matter plays an important role in all of Planck's work, and thus a non-Euclidean space has to be preferred for all Planck units; a circumference value must be used in favour of a radius value as the shortest length, whether or not normalised to unity.

For this reason, in all my work I have chosen to use the original Planck units, which are expressed in terms of h, the Planck constant. The derived values that follow are in perfect agreement with Planck's original values. Using the original Planck values for S (Lp) and T (tp), and simply plugging their values into the ST system of units based on h, one can in fact DERIVE the numeric values of constants such as the free space impedance, the von Klitzing constant, the quantum conductance, the Josephson constant and more (see next page). If one tries to do the same thing using the numerical values of Planck's length and time based on ħ, all derived values for the mentioned constants come out wrong! For these reasons I can say with absolute certainty that the Planck values based on Dirac's ħ are wrong, and any scientific literature showing otherwise would do better to revert to the h-based units, or at least make sure its readers are aware of the above arguments.



The Spacetime freespace constants & Fine structure constant

In the ST system of units list we can clearly see that ALL physics constants and parameters have spacetime in common. Space and time are inter-related, in that dimension S can be differentiated (observed) by dimension T, and vice versa, depending on which dimension is taken as reference by the observer. S and T can however be deduced separately in conditions where spacetime is continuous, that is everywhere, as far as we know. The whole universe can be explained in terms of these two interacting dimensions S and T, which have unique values. Note that in this unification theory, unlike what we perceive as human beings, space and time have the same number of dimensions, and both are SPATIAL. In such a theory, a volume of time T³ with respect to S, for an observer in the spatial dimension S, has the same properties as a volume of space S³ with respect to T, for an observer in the spatial dimension T. This may sound strange to most of us, because we are used to viewing the universe with respect to time, and perceive the spatial dimension T only as our temporal dimension. If you cannot grasp this concept, do not worry, as you should still be able to understand the main issues. The condition for the universe to exist is that we have TWO such spatial dimensions interacting together. As we say, 'It takes two to tango'.
Natural Units (also known as Planck or God's units)

Fictitious mass definition

So, as we have shown in the conversion table, both mass and current can be reduced to spacetime equivalents with no requirement for any hard particle unit such as the kg. However, one cannot expect to put natural values for S and T into the ST equivalent of mass and get a result in kg. The kg is not a natural unit, but a fictitious man-made unit; it is in fact the last SI base unit still based on a prototype. In 1889, the 1st CGPM sanctioned the international prototype of the kilogram, made of platinum-iridium, and declared: "This prototype shall henceforth be considered to be the unit of mass." The picture at the right shows the platinum-iridium international prototype, kept at the International Bureau of Weights and Measures under the conditions specified by the 1st CGPM in 1889. This is a worrying fact for NIST, and in fact resolution 7 of the 21st General Conference on Weights and Measures called for a redefinition of the kilogram, offering to redefine the kg as the mass of a body at rest whose equivalent energy equals the energy of a collection of photons whose frequencies sum to 135639274E42 Hz. Such a redefinition has not yet taken place. In fact all physical units, such as the candela, joule, heat capacity, etc... were set up to different standards for historical reasons.

During his lifetime, Planck derived a set of standard units. As opposed to the SI standard, these units are based on the natural constants: G (gravitational constant), h (Planck constant), c (speed of light), k (Boltzmann constant) and permittivity. Being based on universal constants, they are known as Planck's natural units. The two basic Planck units can be easily derived from my ST conversion table as follows:


h = [k]·T²/S
c = S/T
G = [1/k]·S⁶/T⁵
k = kg conversion factor (read following paragraph)

So, h = [k]·T/c and G = T·c⁶/[k]
G·h = T²·c⁵
T = (Gh/c⁵)^(1/2)

Substituting S = T·c, we get:
S = (Gh/c³)^(1/2)

Natural length (S) = (Gh/c³)^(1/2) = 4.051319933E-35 m
Natural time (T) = (Gh/c⁵)^(1/2) = 1.351374868E-43 s
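These two natural units follow directly from G, h and c. A minimal numerical sketch, assuming standard SI values for the three constants (h-based, not ħ-based, as the text requires):

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
h = 6.62607015e-34  # Planck constant, J*s
c = 2.99792458e8    # speed of light, m/s

S = math.sqrt(G * h / c**3)  # natural length, m
T = math.sqrt(G * h / c**5)  # natural time, s (equals S/c)
```

Both results match the quoted values to within the uncertainty of G.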




Knowing the natural values for S and T, we can now easily define a conversion ratio between the ST units and the man-made unit we call the kg. This constant works out to be equal to (hc⁷/G)^(1/2), or kQ = 1.469944166E18, and is dimensionless. So:


Mass (kg) = 1.469944166E18 × (T³/S³) = kQ·(T³/S³)
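Numerically, kQ = (hc⁷/G)^(1/2), and applying it to the natural T³/S³ reproduces the Planck mass. A sketch, assuming the same standard SI constants as above:

```python
import math

G = 6.674e-11
h = 6.62607015e-34
c = 2.99792458e8

kQ = math.sqrt(h * c**7 / G)     # kg conversion factor
S = math.sqrt(G * h / c**3)      # natural length
T = S / c                        # natural time

planck_mass = kQ * (T**3 / S**3) # reduces to kQ / c^3, in kg
```

The result is about 5.456E-8 kg, the h-based Planck mass quoted further below.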


This factor therefore has to be applied to all units quoting the kg SI unit. For example, for force (newtons) we know that the SI units are kg·m/s², so to convert the ST values into newtons we apply the same conversion equation that we use for the kg.

The above conversion constant will also be applied to energy. Although the kelvin unit in ST has the same dimensions as energy, the conversion constant for kelvin is not the same. We know that 11604.499 kelvin is equivalent to 1 eV, which is equal to 1.602E-19 joules. One kelvin is equal to 1.3806E-23 joules, where 1.3806E-23 is the Boltzmann constant k. It follows that the conversion ratio from spacetime parameters to kelvin units is given by


Kelvin (K) = [kQ/k]·(T/S) .... k = Boltzmann constant


This factor therefore has to be applied to all units quoting the kelvin SI unit. For example, for thermal conductivity we know that the SI units are kg·m/s³/K, so to convert the ST values into SI units we apply factor kQ for the kg unit and factor [kQ/k]⁻¹ for the K⁻¹ unit.

The ampere is the next redundant unit, introduced into the SI due to lack of knowledge of the EM nature of matter. This unit is defined as that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible circular cross section, and placed 1 metre apart in vacuum, would produce between these conductors a force equal to 2×10⁻⁷ newton per metre of length. In my ST conversion, current simply translates to the much neater definition of velocity of EM energy: S/T.

Now natural current = electron charge per unit time = q/(hG/c⁵)^(1/2) = j·(S/T)
The dimensionless conversion factor j = 3.954702562E15. So:


Current (Amps) = 3.954702562E15 × (S/T) = j·(S/T)
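The Amp conversion factor can likewise be reproduced from the electron charge and the natural units. A sketch; e is the CODATA elementary charge, an assumed input not derived in the text:

```python
import math

G = 6.674e-11
h = 6.62607015e-34
c = 2.99792458e8
e = 1.602176634e-19  # elementary charge, C

S = math.sqrt(G * h / c**3)  # natural length
T = S / c                    # natural time

j = e / S                    # Amp conversion factor, dimensionless in ST
natural_current = e / T      # = j * (S/T) = j * c, in amps
```

The natural current works out to about 1.1856E24 A, the Planck current quoted below.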





Derived Planck Units

Using the above calculated unit conversion factors, and the ST conversion table, we can derive many other natural units and constants.

For natural length we have: Length = S = 4.05132E-35 m = Planck length, sometimes also (wrongly) quoted as S/√(2π) = 1.61624E-35 m

For natural time we have: Time = T = 1.35137E-43 s = Planck time, sometimes also (wrongly) quoted as T/√(2π) = 5.391E-44 s

For natural speed we have: Speed = S/T = 4.05132E-35/1.35137E-43 = 299.79E6 m/s = speed of light

For the Planck constant, or natural angular momentum, we have: h = kQ·(T²/S)

h = 1.469944166E18 × (1.351374868E-43)²/4.051319933E-35 = 6.626E-34 kg·m²/s

For the gravitational constant G we have: G = (1/kQ)·(S⁶/T⁵), which works out to 6.672E-11 m³/s²/kg
This time we used 1/kQ since kg⁻¹ appears in the SI units of G.

Now from the units of energy, kg·m²/s², we know that the same constant kQ has to be applied to energy equations. So for energy we have:

E = kQ·(T/S) = 1.469944166E18/299.792E6 = 4.9032E9 joules = Planck energy

For natural mass we have: Mass = kQ·(T³/S³), which works out to 5.456E-8 kg = Planck mass, sometimes also quoted as M/√(2π) = 2.17645E-8 kg

For natural power we have: Power = kQ·(1/S) = 1.469944166E18/4.051319933E-35 = 3.6283E52 watts = Planck power

For natural charge we have: Charge = j·S = 3.954702562E15 × 4.051319933E-35 = 1.602E-19 C = electron charge

For natural current we have: Current = j·(S/T) = 3.954702562E15 × c = 1.18559E24 amps = Planck current

For natural temperature we have: Temperature = [kQ/k]·(T/S) = (1.469944166E18/1.380662E-23) × (1/c) = 3.551344E32 kelvin
A comprehensive list of numeric values for all known physical units has been worked out on the next page.
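The chain of derived units above can be reproduced numerically in a few lines. A minimal sketch, assuming standard SI values for G, h, c and the Boltzmann constant:

```python
import math

G, h, c = 6.674e-11, 6.62607015e-34, 2.99792458e8
k_B = 1.380649e-23            # Boltzmann constant, J/K

S = math.sqrt(G * h / c**3)   # natural length
T = S / c                     # natural time
kQ = math.sqrt(h * c**7 / G)  # kg conversion factor

h_check = kQ * T**2 / S              # recovers the Planck constant, kg m^2/s
energy = kQ * (T / S)                # Planck energy, J
power = kQ / S                       # Planck power, W
temperature = (kQ / k_B) * (T / S)   # Planck temperature, K
```

Each value agrees with the figure quoted in the corresponding line above; h_check recovers h exactly because kQ·T²/S reduces algebraically to h.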


The FINE STRUCTURE CONSTANT ENIGMA

So far so good: all parameters get the exact known natural values by using the derived constants kQ (for the kg unit) and j (for the Amp unit). Now for the tricky part: the free space constants. In the SI system of units we note a few units, like permittivity, permeability, impedance and conductance, that for some weird reason have the kg as part of their unit. For example, permittivity is defined as Amp²·s⁴/(kg·m³), and impedance as m²·kg/(s³·Amp²). Since, during the development of the SI system, nobody ever realised that the kg unit was actually representing a standing-wave electromagnetic structure, we see that this unit has also been applied to units which, although they represent a volume of 3D energy (T³/S³), are NOT standing waves. The spacetime dimensions of a 3D outgoing or incoming travelling volume of energy are the same as those of a 3D standing wave, but the conversion constant for the kg in the two cases is different.


Let us take an example to make everything clear:

We know that the free-space impedance = 376.73 Ω ... radio engineers know this very well.

Now the ST equivalent for impedance is T²·S⁻³, and its SI units are m²·kg/(s³·Amp²).

To calculate the natural impedance, we first put in the natural values for S and T, then multiply by the kg conversion factor kQ, and divide by the square of the Amp conversion factor j.

Natural impedance = 25812.807 Ω, also known as the von Klitzing constant RK.
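Note that algebraically kQ·T²/S = h and j² = e²/S², so the natural impedance reduces exactly to h/e², the von Klitzing constant. A quick numerical confirmation, assuming the same standard SI inputs used earlier:

```python
import math

G, h, c = 6.674e-11, 6.62607015e-34, 2.99792458e8
e = 1.602176634e-19

S = math.sqrt(G * h / c**3)
T = S / c
kQ = math.sqrt(h * c**7 / G)
j = e / S

Z_nat = (kQ / j**2) * T**2 / S**3  # natural (quantum) impedance, ohm
RK = h / e**2                      # von Klitzing constant, for comparison
```

The two expressions agree to floating-point precision, independently of the value chosen for G.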

In 1985, the German physicist Klaus von Klitzing was awarded the Nobel Prize in Physics for his discovery that, under appropriate conditions, the resistance offered by an electrical conductor is quantised; that is, it varies by discrete steps rather than smoothly and continuously.

And here we have an interesting discrepancy between the natural and free-space impedances. This is no mathematical mistake, as both the free-space impedance and the natural impedance have been experimentally confirmed under different conditions. The discrepancy comes from the fact that natural values apply to standing-wave 3D energy structures, whilst the free-space impedance applies to travelling waves as we know them.

Working out the ratio Z₀/Z_NAT = 376.7303/25812.807 = 1/68.518 = 2/137.036

I have found that the ratio of these two impedances is given exactly by:

Free space Z₀ = Z_NAT × 2α

where α is the well known fine structure constant, given by α = μ₀·c·e²/(2h) = 1/137.036
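The quoted expression α = μ₀·c·e²/(2h) can be evaluated directly. A sketch; μ₀ is taken as its classical exact value 4π×10⁻⁷ H/m, an assumed input:

```python
import math

mu0 = 4e-7 * math.pi  # permeability of free space, H/m (assumed exact)
c = 2.99792458e8
e = 1.602176634e-19
h = 6.62607015e-34

alpha = mu0 * c * e**2 / (2 * h)  # fine structure constant
inv_alpha = 1 / alpha             # ~137.036
```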

From this we deduce that although the SI system does not recognise two types of kg units (having the same dimensions T³·S⁻³), we have a relation between the kg used in 'matter' equations and the kg used in free-space 'wave' equations:


kg_freespace/kg_matter = 2α

outgoing wave ÷ standing wave = 2α

This means that for units defining a travelling EM volume of energy, the kg conversion constant kQ has to be multiplied by 2α. We will call this new product of constants kF, denoting free-space EM waves. Thus, for all free-space parameters, we have:



kg_freespace = 2.145340167E16 × (T³/S³) = kF·(T³/S³) .... where kF = 2α·kQ
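The free-space factor kF = 2α·kQ then follows immediately (a sketch, with the same assumed SI constants as before):

```python
import math

G, h, c = 6.674e-11, 6.62607015e-34, 2.99792458e8
alpha = 1 / 137.035999

kQ = math.sqrt(h * c**7 / G)  # standing-wave (matter) kg factor
kF = 2 * alpha * kQ           # free-space (travelling-wave) kg factor
```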


This sheds light on the actual significance of the fine structure constant. It is well known that α, a dimensionless number, is difficult to fit into a rational scheme of physics. Max Born stated: "There seems to be little doubt that the existence of this dimensionless number, the only one that can be formed from e, c and h, indicates a deeper relation between electrodynamics and quantum theory than the current theories provide, and the theoretical determination of its numerical value is a challenge to physics." Richard Feynman (4) writes: "It has been a mystery ever since it was discovered more than fifty years ago, and all good theoretical physicists put this number up on their wall and worry about it."

Now, with the aid of the unified ST table, we have a further clue as to what α might represent. It measures the strength of the electromagnetic interaction between incoming and outgoing spherical waves within the structured standing spherical wave (or matter). It is a ratio of the volume of energy between the travelling spherical waves and the standing-wave EM structure. It is worth noting that the fine-structure 'constant' maintains its value as long as the entity of matter is at standstill. The effective electric charge of the electron actually varies slightly with energy, so the constant changes a little depending on the energy scale at which you perform your experiment. For example, 1/137.036 is its value in experiments at very low energies (like Millikan's oil drop experiment), but at large particle-accelerator energies (like 81 GeV) its value grows to 1/128. This is not the same as saying that α is not constant. In fact, in April 2004, new and more detailed observations on quasars, made using the UVES spectrograph on Kueyen, one of the 8.2-m telescopes of ESO's Very Large Telescope array at Paranal (Chile), put a limit on any change in α of 0.6 parts per million over the past ten thousand million years. So we might say that α measured at zero kelvin is a constant of exceptional stability. The reason for its change at high energy levels is that when the standing EM wave starts radiating heat (EM waves), part of the electron's internal EM energy starts travelling outwards, and the travelling-wave conversion constant kF changes. If the standing wave were somehow changed entirely into pure travelling waves, the 2α factor would increase to unity; kF and kQ would then be equal, and kg_freespace would equal kg_matter. This is the main reason why the forces seem to unify at high energy levels, as shown below:

[Figure: the strengths of the fundamental forces merging at high energy]
The fine structure constant is one of the most wonderful physical constants, α = 1/137.036... The quantity α was introduced into physics by A. Sommerfeld in 1916, and in the past it was often referred to as the Sommerfeld fine-structure constant. It splits some spectral lines in the hydrogen atom such that ΔE = (α/4)²·E_i. In order to explain the observed splitting or fine structure of the energy levels of the hydrogen atom, Sommerfeld extended the Bohr theory to include elliptical orbits and the relativistic dependence of mass on velocity. The quantity α, which is equal to the ratio v_e/c, where v_e is the velocity of the electron in the first circular Bohr orbit and c is the speed of light in vacuum, appeared naturally in Sommerfeld's analysis and determined the size of the splitting or fine structure of the hydrogenic spectral lines. α is simply the ratio of the circumference of the first circular Bohr orbit to the electromagnetic wavelength of the electron's internal energy E = m_e·c². It is the ratio between the two fundamental velocities: c, the speed (S/T) of EM energy in free space, and αc, the speed (S/T) in the quantum world. Feynman wrote:

There is a most profound and beautiful question associated with the observed coupling constant, e, the amplitude for a real electron to emit or absorb a real photon. It is a simple number that has been experimentally determined to be close to -0.08542455. (My physicist friends won't recognize this number, because they like to remember it as the inverse of its square: about 137.03597, with an uncertainty of about 2 in the last decimal place. It has been a mystery ever since it was discovered more than fifty years ago, and all good theoretical physicists put this number up on their wall and worry about it.) Immediately you would like to know where this number for a coupling comes from: is it related to pi or perhaps to the base of natural logarithms? Nobody knows. It's one of the greatest damn mysteries of physics: a magic number that comes to us with no understanding by man. You might say the "hand of God" wrote that number, and "we don't know how He pushed his pencil."

Let's now consider:

Classical electron radius, r_e = 2.8179403E-15 m

Compton wavelength of the electron, λ_C = 2.42631021E-12 m

Bohr radius of the electron, a₀ = 5.29177208E-11 m

Rydberg constant, Ryd = 10973731.5685 m⁻¹

In order to see the relation between each of the above radii and wavelengths, we must express these values in a similar form, for example as wavelengths or orbit circumferences:

λ_class = 2π·r_e = 1.77056410E-14 m
λ_Compton = λ_C = 2.42631021E-12 m
λ_Bohr = 2π·a₀ = 3.32491846E-10 m
λ_Rydberg/2 = 1/(2·Ryd) = 4.55633525275E-08 m

The numerical values for the wavelengths clearly show that:

λ_class/λ_Compton = λ_Compton/λ_Bohr = λ_Bohr/λ_Rydberg/2 = α


We can also work out the frequencies from f = c/λ:

f_class = c/(2π·r_e) = 1.693203E22 Hz
f_Compton = c/λ_Compton = 1.23559E20 Hz
f_Bohr = c/(2π·a₀) = 9.016536E17 Hz
f_Rydberg/2 = 2·Ryd·c = 6.57968E15 Hz


So we have a similar relation for the frequencies:

f_Rydberg/2/f_Bohr = f_Bohr/f_Compton = f_Compton/f_class = α


Knowing that energy E = hf, we get the following energy values:

E_class = hc/(2π·r_e) = 1.121946E-11 J
E_Compton = hc/λ_Compton = 8.187236E-14 J
E_Bohr = hc/(2π·a₀) = 5.974515E-16 J
E_Rydberg/2 = 2·Ryd·hc = 4.359811E-18 J


E_Rydberg/2/E_Bohr = E_Bohr/E_Compton = E_Compton/E_class = α
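The three α ladders (wavelengths, frequencies and energies) can be verified together, since f = c/λ and E = hf leave the ratios unchanged. A sketch using the values quoted in the text:

```python
import math

alpha = 1 / 137.035999

r_e = 2.8179403e-15    # classical electron radius, m
lam_C = 2.42631021e-12 # Compton wavelength, m
a0 = 5.29177208e-11    # Bohr radius, m
Ryd = 10973731.5685    # Rydberg constant, 1/m

lam_class = 2 * math.pi * r_e
lam_Bohr = 2 * math.pi * a0
lam_Ryd2 = 1 / (2 * Ryd)

# each adjacent pair of length scales differs by a factor alpha;
# dividing by c or multiplying by h does not change these ratios
ratios = [lam_class / lam_C, lam_C / lam_Bohr, lam_Bohr / lam_Ryd2]
```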


We usually define one wavelength of motion around a circle as 2πr, and one cycle of a travelling wave λ as going through 2π radians. However, it is well known that in standing waves, the distance from node to node at the fundamental resonant frequency does not occur at 2π, but rather at π. This explains the factor of 2 attached to α. Thus we can re-write our previous kg units comparison as:


kg_travellingwave/kg_standingwave = α ..... confirmed above for T³·S⁻³ (3D energy)
E_Rydberg/2/E_Bohr = α ...... 1D energy form T/S
E_Bohr/E_Compton = α ..... 2D energy form T²·S⁻²
E_Compton/E_class = α ..... 3D energy form T³·S⁻³

From the above we see that the relations between the different energy units of the Rydberg, Bohr, Compton and classical orbits obey the same relation as the travelling-to-standing waves described previously. Starting from the simplest 1D form of energy, T/S, denoted by its energy E_Rydberg, we see that E_Bohr should represent its standing wave. But we also see that the same standing wave E_Bohr of this level becomes the travelling wave of the next level, to create the next higher dimension of standing-wave energy, E_Compton, in 2D (on a surface). In turn, E_Compton (the photon) becomes the travelling wave of the next level, to create the next-dimension standing wave E_class: the electron! Since photons obey all free-space equations, whilst the electron obeys the natural laws of matter, it means that this dimension level is the same as the 3D energy level T³·S⁻³, and that the previous two are in fact the 1D and 2D versions of energy. Looking at the ST table, we see that T/S is usually manifested as energy, T²·S⁻² as momentum, and T³·S⁻³ as 3D mass, but all three are actually different manifestations of mass or energy.

[Figure: two plane 2D waves superposing into a 3D rotating standing wave]

Notice how the standing wave of two plane 2D waves can generate a 3D rotating wave. You have to visualise the blue standing wave as rotating about its axis, in and out of the page. This would become the travelling wave in the next dimension: 3D. It now becomes clear that each energy, for example E_Compton, can exist as a travelling wave in 3D and also as a standing wave in 2D. This solves the enigma of the wave-particle duality of light. Light will behave as a travelling wave in 3D, but will act as a standing wave (perceived as momentum) when projected on a 2D surface, that is, when hitting the surface of a target or sensor.


Tweaking the fine structure constant α using common sense

Since there is no theoretical way to derive the exact value of α, this is usually done experimentally at low energy levels. The currently accepted value from the NIST reference is 1/137.03599911(46). But we now know something that most scientists do not. We know that matter is the 3D version of electromagnetic energy T/S, whose structure is made up of elementary energy units connecting the nodes of the structure. We also know that the ratio of the 3D mass to the 1D 'unit energy' is E_class/E_Rydberg/2 = α⁻³. Hence the number of EM waves joining the structure of an elementary matter unit is equal to α⁻³ and should therefore be an integer. Taking the present CODATA value we get α⁻³ = 2573380.53. So at zero kelvin, the real value should be higher than this. If we stick to our platonic fractal structure, we find that any structure made up of any combination of platonic shapes will always end up with an even total number of elements, so it makes sense to select 2573382 as our value for α⁻³, which gives α = 1/137.0360251, also within the 1986 CODATA margin of error, and most importantly an exact value theoretically derived for a temperature of absolute zero kelvin. Note that since this tweaking method does not involve other parameters, such as the gravitational constant, electron charge, etc... all of which are known to have limited accuracy, the value obtained does not suffer from the inaccuracies of other constants as does the NIST derivation. Also note that in no experiment is α measured directly; it is always a product of other measured parameters, and always measured above zero kelvin. Now the biggest challenge left is to show which 3D structure of 1D EM energy units is composed of exactly 2573382 elements.
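The arithmetic of this tweak is easy to reproduce. Note that rounding α⁻³ up to the even integer 2573382 is this document's own conjecture, not mainstream physics; the code below merely checks the numbers:

```python
alpha_codata = 1 / 137.03599911  # CODATA value cited in the text

n = alpha_codata ** -3           # ~2573380.5 "unit energies" per matter unit

# The text's proposal: pick the nearest even integer above n,
# then invert back to get the "tweaked" alpha
n_even = 2573382
inv_alpha_tweaked = n_even ** (1.0 / 3.0)  # ~137.0360251
```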


Source                        Value for α
CODATA 1986                   1/137.03598(95)
Michael Wales                 1/137.0359896 (exact)
KR/VN-1998                    137.0359853(82)
LAMPF-1999                    137.0359932(83)
CODATA 1999                   1/137.03599976(50)
CODATA 2002                   1/137.03599911(46)
Dr. M. Geilhaupt              1/137.03603
I. Gorelik & Steven Harris    1/137.036020454
Ing. Xavier Borg              1/137.0360251 (exact)


Derivation of Free space constants

I will now reconfirm the above relation between the travelling- and standing-wave energy factor α by deriving all the free-space parameters:

Plugging in these values according to the space-time dimensions given in the table, we get:


Free space speed = S/T = 299.792458E6 m/s = speed of light

Free space impedance = [kF/j²]·T²·S⁻³ = 376.7303 Ω

Free space conductance = [j²/kF]·S³·T⁻² = 2.6544E-3 S

Free space permittivity = [j²/kF]·S²·T⁻¹ = 8.854187E-12 F/m

Free space permeability = [kF/j²]·T³·S⁻⁴ = 1.256637E-6 H/m

The above values agree with the known values for these parameters and thus re-confirm the correctness of my ST system of units.
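These free-space values can be reproduced from the conversion factors alone. A sketch with the same assumed SI inputs as before; note that the impedance reduces algebraically to 2α·h/e² and the permittivity to e²/(2αhc), so the results are independent of G:

```python
import math

G, h, c = 6.674e-11, 6.62607015e-34, 2.99792458e8
e = 1.602176634e-19
alpha = 1 / 137.035999

S = math.sqrt(G * h / c**3)
T = S / c
kQ = math.sqrt(h * c**7 / G)
kF = 2 * alpha * kQ
j = e / S

Z0 = (kF / j**2) * T**2 / S**3   # free-space impedance, ohm
eps0 = (j**2 / kF) * S**2 / T    # free-space permittivity, F/m
mu0 = (kF / j**2) * T**3 / S**4  # free-space permeability, H/m
```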


The Universal Limits

Using this unified theory of spacetime units, we find that the units calculated above coincide exactly with the well-accepted constants found in all conventional physics textbooks and define free space (at least all free space that we can account for so far). Since all accepted physics laws conform to the conversion table, we now have the advantage of going further, to deduce some more interesting data for free space. So, is free space (vacuum) a sea of energy, and can we get a value for the power and frequency obtainable from the so-called vacuum energy / ether energy / ZPE / radiant energy? Is there a limit to the electromagnetic spectrum? Is there a limit to the maximum density of matter? The answers are positive, and can easily be worked out using the spacetime conversion for power:

Free space power limit P₀ = kF/S = 5.2968E50 W

Free space electromagnetic frequency limit f₀ = 1/T = 7.39987E42 Hz = Planck frequency

Free space grand unification energy limit E₀ = kF·(T/S) = 71.56085E6 J or 4.466477E17 GeV ... the energy at which all forces unify

Maximum permissible mass density = kQ·T³·S⁻⁶ = 8.208E95 kg/m³

Free space entropy S = [kQ/(kQ/k)]·T⁰·S⁰ = k = +1.380662E-23 J/K

Free space power is the maximum rate of transfer of energy that can flow through freespace at any point in space or time. These units clearly show the existence and values for the upper boundaries for power, EM spectrum frequency, grand unification energy and mass density anywhere in the universe. Note that kF relates to the fine structure constant, Planck constant and gravitational constant. These values are thus relating the quantum relativistic physics of electromagnetism to quantum gravity.
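The four limits above follow from the same conversion factors. A minimal sketch, with the usual assumed SI constants:

```python
import math

G, h, c = 6.674e-11, 6.62607015e-34, 2.99792458e8
alpha = 1 / 137.035999

S = math.sqrt(G * h / c**3)
T = S / c
kQ = math.sqrt(h * c**7 / G)
kF = 2 * alpha * kQ

P0 = kF / S                  # free-space power limit, W
f0 = 1 / T                   # EM frequency limit, Hz
E0 = kF * T / S              # grand unification energy, J (= kF / c)
rho_max = kQ * T**3 / S**6   # maximum mass density, kg/m^3
```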

So, of particular interest is the derivation of the Energy of Unification from my work, which would also equate to the typical energy of a vibrating string in string theory:

E_unification (eV) = (2α/e) × √(hc⁵/G) = 44.66477×10¹⁶ GeV

Comprehensive list of scientific constants
for the Unified ST system of units

Slowly but surely, active minorities such as string theorists and cosmologists have started to abandon the metric SI units in favour of systems which closely resemble the ST system of units proposed here, which in a way can be considered a continuation of Planck's work. Powers of ten can be introduced to make the values more practical, and, as I have already introduced in this system, conversion constants will be provided to give exact metric conversions for those who at first find it odd to measure everything in terms of length and time. This kind of rebasing happens only rarely in science, and we are reaching the point where science will come to a halt if we do not. Judging from his 1899 paper, in which he proposed them, Planck actually seems to have had exactly this idea in mind.

Natural length (S) = (Gh/c³)^(1/2) = 4.051319933E-35 m
Natural time (T) = S/c = 1.351374868E-43 s
Fine structure constant α = 1/137.03599911
Free space kg conversion factor kF = kg:(T³·S⁻³) = 2α·(hc⁷/G)^(1/2) = 2.145340167E16
Quantum kg conversion factor kQ = kg:(T³·S⁻³) = (hc⁷/G)^(1/2) = 1.469944166E18
Amp conversion factor j = Amp:(S·T⁻¹) = e/S = 3.954702562E15
Kelvin conversion factor kF/k = 1.553848927E39
Radians conversion factor 2π = 6.283185307




Unified dimensions

The unified ST system of units, as already described, is based on the two fundamental dimensions Space and Time. It may be argued that time can be defined in terms of space, in which case it becomes an extra space dimension, and our ST system unifies itself further into space dimensions alone. Some scientists agree with this, some do not; a very detailed discussion is found in Relativity and dimensionality of the world. The problem with declaring time a spatial dimension, with no other way to distinguish it from any of the other three spatial dimensions, is that most units lose their physical interpretation, since our mind can never perceive time as length. It is important to understand that the actual values one decides to assign to these fundamental units are purely arbitrary. We selected the values shown in order to keep the equivalence of the conventional metre and second units in our ST system.


Dimensionless Physical constants

The most important of all dimensionless physical constants is the fine structure constant, denoted by α, whose value is NOT arbitrary and is totally independent of the man-made units selected. A dimensionless constant is a ratio of quantities. Even if we change the numerical values of the fundamental length, time, or any of the constants c, h, or G, the value of α would remain unchanged. This is the value that makes our universe and its physics laws the way they are.


Dimensionless conversion ratios

The ST to SI conversion ratios are necessary due to the huge redundancy of units in the present SI system. These ratios will convert between the ST values in metre and second units and the variety of SI units which we are used to. Conversion is not really necessary to work out any physics problem, its only use can be compared to changing a foreign currency value of money into your own currency. This would not be necessary if all people used the same currency, that is to say, if all scientists used the ST unified system of units.


How to convert between the two systems : Worked example

So, armed with our ST conversion table, which has proved itself over and over again, we are now in a position to PREDICT all physical constants, in both the unified ST system of units and the messy SI system. As opposed to the SI system, all values shown above can be ASSIGNED any precise value, such as the ones shown. The 'error' or level of 'uncertainty' we are accustomed to in the present SI system becomes a thing of the past. That is, we can SET a value for natural S and T, and that automatically sets all the other parameters. Now, for the sake of clarity, I will explain how to work out such values, using resistance as an example.

[Figure: a voltmeter displaying its reading in ST units]

Are you ready for this? Can you convert this reading to Volts?

Let's try to work out the constants for resistance. Since SI-based quantum and SI-based relativistic science are not unified in SI, mainly due to the lack of knowledge about mass, we will have two types of SI constants for each fundamental physical parameter which involves the SI kg unit: one is the free-space value, the other its quantum value. The ST system requires only one natural value for each parameter. Resistance is presently measured in ohms, which in the SI system has units m²·kg/(s³·Amp²). In the unified ST system, resistance is measured in s²/m³ and has dimensions T²·S⁻³. By simply plugging in the natural values of S and T, we get the natural constant for resistance:

R_ST-Natural = T²·S⁻³ = 2.746389015E17 s²/m³

To convert this constant into the conventional SI system of units, you have to apply the conversion factors that turn metres and seconds into m²·kg/(s³·Amp²). The metres and seconds need no conversion, but we have to multiply by the kg conversion factor and divide by the square of the Amp conversion factor. So we get:

R_Freespace = R_ST-Natural × kF/j²
R_Freespace = 2.746389015E17 × 2.145340167E16 / (3.954702562E15)²
R_Freespace = 376.7303135 m²·kg/(s³·Amp²), or ohms

Similarly, for the quantum constant for resistance, we do the same but apply the conversion factor kQ instead of kF:

R_Quantum = R_ST-Natural × kQ/j²
R_Quantum = 2.746389015E17 × 1.469944166E18 / (3.954702562E15)²
R_Quantum = 25812.80745 m²·kg/(s³·Amp²), or ohms


You can recognise these two values as the characteristic impedance of free space and the von Klitzing constant RK-90. In a similar way you can work out all physics constants and, moreover, predict their values before they have been experimentally found! Below is the table of constants for all the parameters we know about, worked out in the way illustrated above. The same conversion factors kF, kQ and j that we used in the resistance calculation lead to the correct values of all other known constants, proving again that the ST conversion table is correct. Some of the derived constants are well known, while others have yet to be discovered; what we can say is that we now know the value of such constants before their discovery. All known constants agree perfectly with the predicted values.
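The worked resistance example can be scripted end to end (a sketch, using the same assumed SI constants as before; the two outputs are the free-space impedance and the von Klitzing constant):

```python
import math

G, h, c = 6.674e-11, 6.62607015e-34, 2.99792458e8
e = 1.602176634e-19
alpha = 1 / 137.035999

S = math.sqrt(G * h / c**3)
T = S / c
kQ = math.sqrt(h * c**7 / G)
kF = 2 * alpha * kQ
j = e / S

R_nat = T**2 / S**3            # ST-natural resistance, s^2/m^3
R_free = R_nat * kF / j**2     # free-space impedance, ohm
R_quantum = R_nat * kQ / j**2  # von Klitzing constant, ohm
```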


The ST unified system constants converted to SI units

Parameter | SI units | ST dims | Unified ST constant converted to free space SI units (outgoing wave) | Unified ST constant converted to quantum SI units (standing wave) | Remarks
Distance S m S 4.0513199E-35 4.0513199E-35 Planck length
Time t sec T 1.3513749E-43 1.3513749E-43 Planck time
Area A m2 S2 1.641319E-69 1.641319E-69 Planck Area
Volume V m3 S3 6.649510E-104 6.649510E-104 Planck Volume
Speed/ Velocity u m/s ST-1 299.792458E6 299.792458E6 Speed of light
Acceleration a m/s2 ST-2 2.218426E51 2.218426E51 Planck Acceleration
Force/ Drag F kgm/s2 TS-2 1.7664E42 1.2103E44 1.2103E44=Planck Force
Surface Tension g kg/s2 TS-3 4.3615E76 2.9884E78 Not yet discovered
Energy/ Work E kg m2/s2 TS-1 71.56085E6 4.903206E9 Grand Unification energy
Electron Volt eV kg m2/s2 T S-1 4.466477E26 3.060341E28 Grand Unification energy
Moment m kg m2/s2 T S-1 71.573E6 4.9041E9 See energy
Torque t kg m2/s2 T S-1 71.573E6 4.9041E9 See energy
Power P kg m2/s3 S-1 5.2968E50 3.6293E52 Planck Power
Density r kg/m3 T3 S-6 1.1979E94 8.2080E95 Max Black hole density
Mass m kg T3 S-3 7.9636E-10 5.4565E-8 Not yet discovered
Momentum p kg m/s T2 S-2 0.2387 16.358 See magnetic flux
Impulse J kg m/s T2 S-2 0.2387 16.358 See magnetic flux
Angular Momentum L kg m2/s T2 S-1 9.670553E-36 6.626069E-34 Planck constant
Inertia I kg m2 T3 S-1 1.3067E-78 8.9530E-77 Not yet discovered
Angular velocity/freqw rad/sec T-1 4.6502E43 4.6502E43 Not yet discovered
Pressure/Stress P kg/m/s2 T S-4 1.0767E111 7.3770E112 Radiation pressure
Specific heat Capacity c m2/sec2/K S3 T-3 5.7814E41 3.9613E43 Not yet discovered
Specific Entropy m2/sec2/K S3 T-3 5.7814E41 3.9613E43 Not yet discovered
Resistance R kg m2/sec3/Amp2 T2 S-3 376.7303 25812.807 Freespace impedance / von Klitzing constant RK-90
Impedance Z kg m2/s3/Amp2 T2 S-3 376.7303 25812.807 Freespace impedance / von Klitzing constant RK-90
Conductance S s3 Amp2/kg/m2 S3 T-2 2.6544E-3 3.8740E-5 Free space conductance / half of known conductance quantum
Capacitance C s4Amp2/kg/m2 S3 T-1 3.5871E-46 5.2353E-48 Not yet discovered
Inductance L m2 kg/s2/Amp2 T3 S-3 5.091038E-41 3.488278E-39 Not yet discovered
Current I Amp S T-1 1.18559E24 1.18559E24 Not yet discovered
Electric charge/flux q Amp sec S 1.60218E-19 1.60218E-19 Electron charge
Magnetic charge/flux f m2 kg/sec2/Amp T2 S-2 6.035885E-17 4.135667E-15 Not yet discovered
Magnetic flux density B kg/sec2/Amp T2 S-4 3.677459E52 2.519721E54 Not yet discovered
Coefficient of viscosity h kg/m/s T2 S-4 1.4543257E68 9.9647487E69 Not yet discovered
Magnetic reluctance R Amp2 sec2/kg/m2 S3 T-3 1.964236E40 2.866744E38 Not yet discovered
Electric flux density kg m4/sec2 ST 1.174542E-61 8.047727E-60 Not yet discovered
Electric field strength E m kg/sec3/Amp T S-3 1.1024745E61 7.5539348E62 Not yet discovered
Magnetic field strength H Amp/m T-1 2.926429E58 2.926429E58 Not yet discovered
Frequency f sec-1 T-1 7.399871E42 7.399871E42 Limit of EM spectrum
Wavelength l m S 4.0513199E-35 4.0513199E-35 Planck length
Voltage EMF V kg m2/sec3/Amp T S-2 4.466477E26 3.060341E28 Not yet discovered
Magnetic potential MMF kg/sec/Amp T2 S-3 1.489856E18 1.020820E20 Not yet discovered
Permittivity e sec4 Amp2 /kg/m3 S2 T-1 8.854187E-12 1.292243E-13 Permittivity of free space
Permeability m kg m/sec2/Amp2 T3 S-4 1.256637E-6 8.610226E-5 Permeability of free space
Resistivity r m3kg/sec3/Amp2 T2 S-2 1.526255E-32 1.045759E-30 Not yet discovered
Temperature T K T S-1 5.183082E30 3.551344E32 Planck Temperature
Enthalpy H kgm2/s2 T S-1 71.56085E6 4.903206E9 Not yet discovered
Conductivity s Sec3Amp2 /kg/m3 S2 T-2 6.551985E31 9.562429E29 Not yet discovered
Thermal Conductivity kg m /sec3/K S-1T-1 2.5218253E54 2.5218253E54 Not yet discovered
Energy density kg/m/sec2 T S-4 1.0761823E111 7.3737857E112 Not yet discovered
Ion mobility m Amp sec2/kg S4 T-2 2.7192689E-53 3.9686927E-55 Not yet discovered
Fluidity m sec/kg S4 T-2 6.8760389E-69 1.0035376E-70 Not yet discovered
Effective radiated power ERP kg/m/sec3 S-3 3.2263133E119 2.2106053E121 Not yet discovered
Gravitational Constant G m3/kg/s2 S6 T-5 4.573028E-9 6.674200E-11 Gravitational constant
Planck Constant h kg m2/sec T2 S-1 9.670553E-36 6.626069E-34 Planck constant
Young's Modulus E kg/m/s2 T S-4 1.0761823E111 7.3737857E112 Not yet discovered
Stefan Boltzmann constant E kg/K4/sec3 S T-4 1.3896940E-9 4.3202129E-15 =(15/2pi5) Stefan Boltzmann constant due to Riemann Zeta fn
Hertz volt relationship (Hz/V)Kj sec2Amp/kg/m2 S2 T-2 1.656758E16 2.417989E14 Half Josephson constant
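Every entry in the table follows mechanically from the same recipe used for resistance. The sketch below (my own illustration, not the Blaze Labs converter applet; the function name st_to_si is hypothetical) converts an ST dimension into SI units, given how many kg and Amp factors appear in the SI unit:

```python
S  = 4.0513199e-35    # natural distance unit, m
T  = 1.3513749e-43    # natural time unit, sec
KF = 2.145340167e16   # kg conversion factor (free space)
KQ = 1.469944166e18   # kg conversion factor (quantum)
J  = 3.954702562e15   # Amp conversion factor

def st_to_si(t_pow, s_pow, kg_pow=0, amp_pow=0, quantum=False):
    """Convert an ST constant T^t_pow * S^s_pow into SI units."""
    k = KQ if quantum else KF
    return (T ** t_pow) * (S ** s_pow) * (k ** kg_pow) * (J ** amp_pow)

# Energy: ST dims T S-1, SI units kg m2/s2 (one kg factor, no Amp):
print(st_to_si(1, -1, kg_pow=1))                # ~7.156e7, free space value
# Planck constant: ST dims T2 S-1, quantum value:
print(st_to_si(2, -1, kg_pow=1, quantum=True))  # ~6.626e-34
```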




SI / ST System converter

[Interactive converter applet omitted - freeware Java by Blaze Labs Research © 2006. Enter a value in SI (or SI-derived) units to obtain its proper SI unit breakdown (kg, m, sec, Amps, Kelvin, Candela, moles), its unified ST units (m, sec), and its natural ST, natural quantum (SI) and natural free space (SI) values.]

Special Relativity - Shrinking distances, time dilation, mass changes
Are these effects real, as Maxwell and Einstein thought they are?

Was Newton really in trouble?

In 1687, Newton published his laws of motion in Philosophiae Naturalis Principia Mathematica. These three scientific laws could explain and predict the behaviour of moving bodies. Later on, however, experiments with high velocity particles were found to give different results than those predicted by Newton's laws. Newton seemed to be in trouble. Kinetic energy was no longer proportional to the velocity squared, but was heading towards a much higher value. The more a particle's velocity approached the speed of light c, the greater the discrepancy between the measured KE and Newton's predicted KE. At this point, Lorentz came up with the idea of a multiplying correction factor, γ = 1/√(1-v2/c2).
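The discrepancy described above is easy to display numerically: compare Newton's 1/2mv2 against the relativistic kinetic energy (γ-1)mc2 that high-velocity experiments actually measure. The electron mass below is purely an illustrative choice:

```python
import math

m = 9.10938e-31   # electron rest mass, kg (illustrative choice)
c = 299792458.0   # speed of light, m/s

for frac in (0.1, 0.5, 0.9, 0.99):
    v = frac * c
    gamma = 1.0 / math.sqrt(1.0 - frac ** 2)       # Lorentz factor
    ke_newton   = 0.5 * m * v ** 2                 # Newton's prediction
    ke_measured = (gamma - 1.0) * m * c ** 2       # what experiments give
    print(f"v = {frac:.2f}c  measured/Newton = {ke_measured / ke_newton:.3f}")
```

At v = 0.1c the two values agree to better than 1%, but at v = 0.99c the measured KE is already more than ten times Newton's prediction.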
This led to the idea of relativistic mass, a mass equivalent to γmo, where mo is the rest mass. This was further developed by Einstein in his special relativity (SR) theory, which was more or less the public version of Lorentz and Maxwell's work. SR introduced for the first time quite 'weird' effects like time dilation and the increase in mass of a moving body. One of the strangest parts of special relativity as we know it today is the conclusion that two observers who are moving relative to one another will get different measurements of the length of a particular object, or of the time that passes between two events. Consider two observers, each in a space-ship laboratory containing clocks and metre sticks. The space ships are moving relative to each other at a speed close to the speed of light. Using Einstein's theory, each observer will see the metre stick of the other as shorter than their own, by the same factor γ. This is called length contraction. Each observer will see the clocks in the other laboratory as ticking more slowly than the clocks in his/her own, by a factor γ. This is called time dilation. This is what special relativity predicts, and although experimental results seem to agree, everybody still feels that there is something wrong. Newton's laws became the result of the SR equations for the condition γ=1, and as long as the mathematical predictions were in perfect agreement with experimental values, everyone was happy to accept the requirement for such weird effects to be part of nature, even though no logical explanation was ever found. Although this solved the discrepancy between theory and experiment, it degraded the scientific laws, as the correction factor could not be explained in terms of a physical model.


Attempting to build a physical model

In an attempt to visualise a physical model, I transferred both Newton's law and the experimental results onto a geometric diagram to better interpret the Lorentz factor. The diagram below has been sketched following Newton's laws, experimental evidence and common sense.

A spherical particle of mass mo leaves the source to reach its destination, a distance S apart, in time t. Note that although a point particle (a zero dimensional object) is still accepted in most physics textbooks, it is an impossibility and cannot be used to define a particle. The particle's mean translational velocity is equal to v = S/t. Experimental evidence shows that this translational velocity can range from zero to very close to c, the speed of light, so geometrically, v can be shown as the projected shadow of velocity c, which makes an angle θ with v. And since v ≤ c, c must always be the hypotenuse of the triangle c, v, a.

[Diagram: velocity vector triangle c, v, a]

Also, from Newtonian mechanics, we know that the total KE is equal to the sum of the body's translational kinetic energy and its rotational energy, or angular kinetic energy:

Total KE= Translational KE + Rotational KE .... If Vreal is the resultant total velocity, then

VREAL2 = v2 + Vo2 .... where v and Vo are the translational and orbiting velocities at a point in time

This relation shows us that VREAL, v, Vo form a second right angled triangle, with VREAL being the hypotenuse. The translational kinetic energy of such a moving particle is equal to 1/2mv2, where v is the linear velocity of the sphere, equal to the straight line distance S between source and destination divided by the time t taken to travel the whole path. This equation holds very well for non-relativistic mechanics, but experiments involving particles travelling at relativistic speeds show that KE no longer obeys the equation for translational velocity v=S/t, but shoots up to infinity as v approaches c. This implies that although we 'see' the particle leave the source and reach its destination S metres apart in t seconds, its real resultant KE is somehow not equal to the calculated translational KE 1/2mv2 or 1/2m(S/t)2. How can this be? This is where Lorentz and Einstein made their fatal mistake. They reasoned: well, if KE=1/2mv2 is not being followed, and v=S/t, then the particle's mass must be changing. As you will see, they were wrong! The mass is not changing at all; it is the real path of the particle which can no longer be approximated as a straight line, especially when v approaches c. When one looks again at the relation for VREAL, their mistake becomes obvious - they assumed a zero rotational KE, that is, a null Vo. The object would in fact be rotating/spinning around the path connecting source to destination, a helical path being a good example. This is the key to understanding relativity. One can easily understand how motion about its own axis can actually change the path length, whilst the particle still reaches its destination point. The distance from the source to the destination divided by the time taken gives the translational velocity, limited to c, but the actual helical path divided by the same time taken results in a much higher velocity, not limited to c.
So at any point in time, the particle is in fact travelling at an angle to the linear velocity v, at a higher speed. We also know that the velocity of light as seen by a particle is totally independent of its real velocity, so the velocity of the real path taken by the particle is always normal to the velocity of light. So we know that c is perpendicular to VREAL. Now we also know that as v tends to zero, angle θ tends to 90 degrees, and VREAL and v become almost equal, meaning that they tend to become parallel to each other. As v tends to c, angle θ tends to zero, and VREAL and v will approach an angle of 90 degrees to each other, whilst VREAL will grow infinitely long. This means that the angle between v and VREAL is equal to 90 - θ. Since triangle v, VREAL, Vo is a right angled triangle, and the angles between vectors c & v, and VREAL & Vo, are equal, triangle c, a, v is similar to triangle v, VREAL, Vo.

Consequences of the above description:

Lorentz and Einstein were wrong in their interpretation of the experimental results.
The path travelled by a particle can only be approximated as a straight line either in calculus, or as the mean velocity tends to zero. So strictly speaking a particle travels in a straight line only at v=0; in other words, a particle CANNOT travel in a perfect straight line. Nor do electromagnetic waves travel in a straight line; they only spiral along a line.
For Newton's laws of motion to apply at relativistic speeds, the velocity taken into account must NOT be the mean velocity v=S/t but the real velocity VREAL along the particle's real path. It is not Newton's laws which need a correction factor; we only need to take into account an orbiting velocity which is ALWAYS greater than zero.
Although the particle cannot reach its destination before another particle which could theoretically cross the path in a straight line at the speed of light, its REAL VELOCITY along its real path can exceed by far the speed of light. Still, its information content in the direction of the 'imaginary' straight line path cannot travel faster than light. I refer to the straight line path as imaginary for the reason that nothing is really travelling along this path, but only spiralling around it.
The velocity of light as seen from the particle's real path is totally independent of the particle's velocities.



Derivation of the Lorentz factor using Newton's laws on a body having both linear and angular velocities
Now that we are armed with a better understanding of the actual velocity components of any moving particle, we can easily derive the Lorentz factor using the above diagram, by applying simple geometry!

VREAL2 = v2 + Vo2 .... (1) by Pythagoras

VREAL/Vo = c/v .... from similar triangles

Vo = v*VREAL/c .... (2)

Substituting for Vo in equation (1):

VREAL2 = v2 + (v2/c2)*VREAL2

VREAL2(1-v2/c2) = v2

VREAL = v/√(1-v2/c2)

VREAL = γv .... where γ = 1/√(1-v2/c2)
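The similar-triangle construction can be verified numerically for any sub-luminal mean velocity; v = 0.8c below is just an arbitrary test point:

```python
import math

c = 299792458.0
v = 0.8 * c                                    # arbitrary test velocity

gamma  = 1.0 / math.sqrt(1.0 - (v / c) ** 2)   # Lorentz factor
V_real = gamma * v                             # result of the derivation above
V_o    = math.sqrt(V_real ** 2 - v ** 2)       # orbital component, by Pythagoras (1)

# Similar-triangles condition (2): V_real / V_o must equal c / v
print(V_real / V_o, c / v)   # both 1.25 for v = 0.8c
```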

This means that most of the mathematics derived by Einstein and Lorentz still holds true, but with a different meaning: a meaning which, unlike time dilation and distance contraction, does make sense and can be easily explained by a physical model of the particle's actual path of travel.

The REAL PATH of a moving object

From the above, it follows that travelling in a straight line is something which nature abhors; perfect straight line travel occurs only at v=0, or in calculus as dS/dt tends to zero. Also, referring again to our relativity velocity vector diagram, the resultant real velocity VREAL is made up of two normal vectors, one of which is v, which points in the direction joining the source to the destination. So we know that VREAL is really the resultant of two velocity vectors v and Vo which are normal to each other. This might not make much sense until you follow the helical path diagram, which shows how such a path must look to satisfy all the above conditions.

A helical path is one example in which the resultant velocity is made up of two velocity vectors v & Vo which are always normal to each other at any point in time. This is a path in which the ratio of VREAL to v is equal to the Lorentz factor, resulting in a kinetic energy value which goes to infinity as the mean velocity v tends to c, but where no distances shrink, no time dilates and no mass goes to infinity! At a low mean velocity v, much less than c, the orbital velocity Vo of the spiral will be very small, the path will closely resemble a straight line, and VREAL will be almost equal to v and to S/t. In such a case Newton's laws will give the correct results even if the path is approximated as a straight line and the angular velocity is assumed null. As the velocity increases, the orbiting speed will also increase and VREAL will increase to superluminal helical velocities, but the magnitude of the mean linear velocity to the destination will still be less than c. Applying Newton's laws on a straight path will no longer yield the correct results, because the angular velocity is no longer negligible. So, the correction factor γ is only required if one totally ignores the angular motion. Once the angular velocity is put into the equation, the correct kinetic energy is obtained.
This model does not exclude superluminal speeds, however it still has the speed of light limit within its linear part, the velocity which we measure by measuring the time it takes for a particle to travel from source to destination. This also clearly explains why we do not see photons along their travel. We see photons at the radiating source, and at destination, but since they are superluminal during their helical journey, they are not visible along their path! Also, it is kind of silly to assume that when a photon is released, its total KE is only made up of translational KE and totally ignore its rotational KE. In fact from the above it is obvious that a mass with zero angular KE is not a mass at all.


Unlocking the secrets of matter

From the particle section discussion, we know that matter (defined as having mass) is made up of standing waves. The picture below shows a simple form of matter made up of a helical standing wave. A pair of helical waves is all that is required to generate matter. All elementary particles should be of this form: whether it is an electron, an atom, a quark or any other newly discovered particle, it will be of this form. An electron is one such example. The positron is exactly the same but goes backwards in time; all this means is that v, Vo and VREAL point in the opposite directions. The spin is simply v.


[Diagram: helix velocity vectors]

For the condition Vo/v = √2, or θ = 35.264 degrees, we get VREAL = c√2.
Applying Newton's law to find the total internal energy of a particle:

E = 1/2mv2
E = 1/2m[√(2)*c]2
E = 1/2m*2c2
E = mc2
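The geometric condition can be checked numerically: with θ = 35.264 degrees we have v = c*cos(θ) (the projection of c onto v) and Vo = √2*v, which together give VREAL = √2*c:

```python
import math

c = 299792458.0
theta = math.radians(35.264)     # the special angle quoted above
v   = c * math.cos(theta)        # translational component (projection of c)
V_o = math.sqrt(2.0) * v         # condition Vo/v = sqrt(2)
V_real = math.sqrt(v ** 2 + V_o ** 2)

# (1/2) m V_real^2 then reduces to m c^2 for any mass m
print(V_real / c)   # ~1.41421, i.e. sqrt(2)
```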

So, knowing about the helical path, we can derive Einstein's equation directly from Newton's equation, not the other way round! More important is the fact that we can finally construct a physical model for ALL matter. Note that the above standing wave is made up of pure electromagnetic waves, and that the whole circular helix has the properties of matter. Now, if external energy (kinetic & rotational) is supplied to this helix, the whole structure will start moving in a helical path of greater dimensions. The grey entities moving around the bigger circular helix will thus be the original helices. The bigger helix will still have the properties of matter, but its standing wave will be home to a number of smaller 'particles' which can be made to increase or decrease in quantity by kicking them with enough energy. This mechanism is the fundamental mechanism of nuclear theory. All smaller helices within one larger helix will have exactly the same properties and be similar in size and frequency. Each helix size will thus exist in a different hierarchy level, with the lowest level being the smallest helix that is made up of pure electromagnetic waves, that is, with no internal circular helices. The relation between hierarchy levels is governed by the fine structure constant, which actually defines the maximum speed limits for v and Vo at which lower stage helices can move around the main circular helix.


[Figure: hierarchy of helical structures]

The figure above is a much better scientific explanation of the origin of matter and of what one would expect to get when bombarding matter in particle accelerators. It also solves the enigma of the point particles. Nobel laureate Paul Dirac, who developed much of the theory describing the quantum waves of the electron, was never satisfied with the point-particle electron, because the Coulomb force required a mathematical correction termed renormalization. In 1937 he wrote: "This is just not sensible mathematics. Sensible mathematics involves neglecting a quantity when it turns out to be small, not neglecting it because it is infinitely large and you do not want it!" [P. A. M. Dirac, Nature, 174, 321, p 572 (1937)]. The figure below is basically what present books teach (not a joke).



The realities of mainstream science


So, we collide two cherries, and get pears, bananas, apples and all fruit varieties! Of course, with no physical model for matter, nuclear theory offers a lot of enigmas and surprises to scientists. Once we start sorting out matter on different hierarchy levels, everything starts to become clearer. For example, we can now say that all known atoms exist in the same hierarchy level. They differ only in the number of helical anti-nodes (protons), their v/Vo ratio (proton to neutron ratio), and the number of smaller helices moving around (electrons), but they are on the same hierarchy level. The fact that the number of protons is usually equal to the number of electrons indicates that, for a stable atom, each structure antinode can handle one lower hierarchy helix within it. Let's say these higher level structures are the cherries. Once we collide two of these big helices into each other, some of the lower level helices get mechanically dislodged, and since they are standing wave circular helices in their own right, they will be detected as independent matter, say pears and bananas. So, why were the pears and bananas not visible in the first place? Simply because their helical velocity is faster than the speed of light, and anything faster than the speed of light cannot be detected! The pears and bananas are no longer spiralling around the cherry structure at superluminal speeds, but have been kicked off their orbit and travel at the much lower velocity resulting after impact. If one splits the resulting pears, then apples may be detected, and the process continues, with energy levels going up as lower hierarchy levels are approached. The process continues until Planck's energy level is reached, at which point the resulting outcome of the bombardment will not be a standing wave (detected as matter) but pure travelling electromagnetic waves at Planck's frequency and energy, travelling at the speed of light.

Calculating speed limits for v and Vo.

ERydberg/2/EBohr = EBohr/ECompton = ECompton/EClassical = α = 1/137.036

From E=mc2, we can therefore get the relation in terms of masses:

MRydberg/2/MBohr = MBohr/MCompton = MCompton/MClassical = α = 1/137.036

This clearly shows that α is nothing but the mass or energy ratio of a circular helix standing wave to a similar helix of a higher hierarchy level. If we take the lower hierarchy level helix as our 'stationary mass' Mo, we have:

MREAL = γMo .... where γ = 1/√(1-v2/c2)

...but MREAL = (1/α)Mo, which implies that for sequential hierarchy levels in matter, α = 1/γ

α = 1/γ = √(1-v2/c2)

1/137.036 = √(1-v2/c2)

v = 0.999973374c

Also, cα = √(c2-v2), which is why I have put cα in the relativity diagram at the top of this page.
The fine structure angle θ = ArcSin(α), so:

Fine structure angle θ = ArcSin(1/137.036) = 0.418111 degrees. The real superluminal helical path velocity at which the internal hierarchy levels move within the structure is

VREAL = γv = v/α = 137.036v

VREAL = 137.036 * 0.999973374c = 137.032c

So, strictly speaking, Einstein's equation E=mc2 is not exact, since it assumes that the translational velocity v can reach c, whilst in fact it is limited to a maximum of 0.999973374c. So the exact equation for the energy-mass equivalence is:

E = 0.999946748 mc2
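All the numbers in this section follow directly from the single assumption α = 1/γ:

```python
import math

alpha = 1.0 / 137.036                  # fine structure constant
v_frac = math.sqrt(1.0 - alpha ** 2)   # v/c, from alpha = 1/gamma
V_real_frac = v_frac / alpha           # helical velocity in units of c

print(v_frac)        # ~0.999973374, the translational speed limit
print(V_real_frac)   # ~137.032, the superluminal helical velocity
print(v_frac ** 2)   # ~0.999946748, the factor in E = 0.999946748 mc2
```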


Mass varies with its absolute velocity
...and together with the gravitational constant G, over 50 other scientific units depend on the stars' position!



Final Demystification of the gravitational constant variation



For hundreds of years, great thinkers have thought about the substance of matter, and specifically about the property we call mass. It all started with the work of Isaac Newton, an English scientist and mathematician who lived from 1642 to 1727. He had one of the most brilliant minds the world has ever known. Legend has it that seeing an apple fall made Newton reflect on the laws behind gravity, the force which keeps us bound to the Earth. Being a good experimenter himself, it did not take him too long to work out the laws of gravity. In fact, at the age of 44, he found that the motion of the planets and the moon, as well as that of the falling apple, could be explained by one simple Law of Universal Gravitation, which states that any two objects attract each other with a force equal to the product of their masses M1 & M2, divided by the square of their distance apart R, times a constant of proportionality G:


F = G * (M1 * M2) / R2

Newton estimated this constant of proportionality, called the gravitational constant G, and also referred to as 'big G', from the gravitational acceleration of the falling apple and an approximate guess for the average density of the Earth. However, more than 100 years elapsed before G was first measured in the laboratory. It was in 1798 that Henry Cavendish and co-workers obtained a value for G of 6.67E-11 N m2/kg2, accurate to about 1%. During his experiment, Cavendish claimed that he was "weighing the Earth", since, once G is known, the mass of the Earth can be obtained from the known gravitational acceleration at the Earth's surface. In fact, knowing G enables us to find the mass of bodies like the sun or the moon from the orbital radius and the time for one complete cycle of a body circling them. Early in the 20th century, Albert Einstein developed his theory of gravity called General Relativity, in which the gravitational attraction is explained as a result of the curvature of space-time. This curvature is also proportional to G, which is taken as a constant. Planck also defined G as one of his universal constants (G, h, c) and stated that these three constants are the same anywhere in the universe.
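"Weighing the Earth" as Cavendish described it is a one-line computation once G is known: equating Newton's law to m*g at the surface gives M = g*R2/G. The surface values below are standard approximations:

```python
g = 9.81          # surface gravitational acceleration, m/s^2
R = 6.371e6       # mean Earth radius, m
G = 6.674e-11     # gravitational constant, N m^2/kg^2

# From F = G*M*m/R^2 = m*g  =>  M = g*R^2/G
M_earth = g * R ** 2 / G
print(M_earth)    # ~5.97e24 kg
```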

Controversial values of G

This is a photograph of a simple big-G apparatus used to indirectly determine the value of G. The value of the fundamental constant G has been of great interest to physicists for over 300 years, and it has the longest history of measurements after the speed of light. In spite of the central importance of the universal gravitational constant, it is the least well defined of all the fundamental constants. Despite our modern technology, almost all measurements of G have used variations of the classical torsion balance technique as engineered by Cavendish in the 18th century. The usual torsion balance basically consists of two masses connected by a horizontal rod suspended by a very thin fibre, referred to as the dumbbell. When two heavy attracting bodies are placed on opposite sides of the dumbbell, the dumbbell twists by a very small amount. The attracting bodies are then moved to the other side of the dumbbell, and the dumbbell twists in the opposite direction. The magnitude of these twists is used to find G. Another common variation of this technique is to set the dumbbell into an oscillatory motion and measure the frequency of oscillation. The gravitational interaction between the dumbbell and the attracting bodies causes the oscillation frequency to change slightly when the attractors are moved to a different position, and this frequency change determines G. This frequency shift method was used in the most precise measurement of G to date (reported in 1982) by Gabe Luther and William Towler from the National Bureau of Standards and the University of Virginia. Based on their measurement, CODATA now lists G = 6.6742E-11 N m2/kg2, with a quite conservative assigned uncertainty of 0.015%. Compared to other well known constants of physics, the fractional uncertainty in G is still thousands of times larger.
As a result, the masses of the Earth, the sun, the moon and all celestial bodies cannot be known to an accuracy greater than that of G, since all these quantities have been derived from the experimental G. The units of G are m3/kg/sec2, so any error in the kg unit will show up as an error in G. An uncertainty of 0.015% might seem quite small, but when applied to the masses under consideration, for example the Earth with a nominal mass of 5.972E24 kg, it means that the actual mass could be higher by as much as 8.958E20 kg! That is why the mass of the Earth can only be given to three decimal places.
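The quoted figure follows directly from propagating the relative uncertainty of G onto the derived mass:

```python
M_earth   = 5.972e24       # kg, nominal mass of the Earth
rel_unc_G = 0.015 / 100.0  # CODATA relative uncertainty in G

# Since M is derived as g*R^2/G, its relative error tracks that of G:
delta_M = M_earth * rel_unc_G
print(delta_M)   # ~8.958e20 kg of mass uncertainty
```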


Variation evidence from readings spanning over 200 years

No. | Author | Year | G ± accuracy (x10-11 m3 kg-1 s-2) | % deviation from CODATA
1 | Cavendish H. | 1798 | 6.74±0.05 | +0.986
2 | Reich F. | 1838 | 6.63±0.06 | -0.662
3 | Baily F. | 1843 | 6.62±0.07 | -0.812
4 | Cornu A., Baille J. | 1873 | 6.63±0.017 | -0.662
5 | Jolly Ph. | 1878 | 6.46±0.11 | -3.209
6 | Wilsing J. | 1889 | 6.594±0.015 | -1.202
7 | Poynting J.H. | 1891 | 6.70±0.04 | +0.387
8 | Boys C.V. | 1895 | 6.658±0.007 | -0.243
9 | Eotvos R. | 1896 | 6.657±0.013 | -0.258
10 | Brayn C.A. | 1897 | 6.658±0.007 | -0.243
11 | Richarz F. & Krigar-Menzel O. | 1898 | 6.683±0.011 | +0.132
12 | Burgess G.K. | 1902 | 6.64±0.04 | -0.512
13 | Heyl P.R. | 1928 | 6.6721±0.0073 | -0.031
14 | Heyl P.R. | 1930 | 6.670±0.005 | -0.063
15 | Zaradnicek J. | 1933 | 6.66±0.04 | -0.213
16 | Heyl P., Chrzanowski | 1942 | 6.673±0.003 | -0.018
17 | Rose R.D. et al | 1969 | 6.674±0.004 | -0.003
18 | Facy L., Pontikis C. | 1972 | 6.6714±0.0006 | -0.042
19 | Renner Ya. | 1974 | 6.670±0.008 | -0.063
20 | Karagioz et al | 1975 | 6.668±0.002 | -0.093
21 | Luther et al | 1975 | 6.6699±0.0014 | -0.064
22 | Koldewyn W., Faller J. | 1976 | 6.57±0.17 | -1.561
23 | Sagitov M.U. et al | 1977 | 6.6745±0.0008 | +0.004
24 | Luther G., Towler W. | 1982 | 6.6726±0.0005 | -0.024
25 | Karagioz et al | 1985 | 6.6730±0.0005 | -0.018
26 | Dousse & Rheme | 1986 | 6.6722±0.0051 | -0.030
27 | Boer H. et al | 1987 | 6.667±0.0007 | -0.108
28 | Karagioz et al | 1986 | 6.6730±0.0003 | -0.018
29 | Karagioz et al | 1987 | 6.6730±0.0005 | -0.018
30 | Karagioz et al | 1988 | 6.6728±0.0003 | -0.021
31 | Karagioz et al | 1989 | 6.6729±0.0002 | -0.019
32 | Saulnier M.S., Frisch D. | 1989 | 6.65±0.09 | -0.363
33 | Karagioz et al | 1990 | 6.6730±0.00009 | -0.018
34 | Schurr et al | 1991 | 6.6613±0.0093 | -0.193
35 | Hubler et al | 1992 | 6.6737±0.0051 | -0.008
36 | Izmailov et al | 1992 | 6.6771±0.0004 | +0.043
37 | Michaelis et al | 1993 | 6.71540±0.00008 | +0.617
38 | Hubler et al | 1993 | 6.6698±0.0013 | -0.066
39 | Karagioz et al | 1993 | 6.6729±0.0002 | -0.019
40 | Walesch et al | 1994 | 6.6719±0.0008 | -0.035
41 | Fitzgerald & Armstrong | 1994 | 6.6746±0.001 | +0.006
42 | Hubler et al | 1994 | 6.6607±0.0032 | -0.202
43 | Hubler et al | 1994 | 6.6779±0.0063 | +0.055
44 | Karagioz et al | 1994 | 6.67285±0.00008 | -0.020
45 | Fitzgerald & Armstrong | 1995 | 6.6656±0.0009 | -0.129
46 | Karagioz et al | 1995 | 6.6729±0.0002 | -0.019
47 | Walesch et al | 1995 | 6.6685±0.0011 | -0.085
48 | Michaelis et al | 1996 | 6.7154±0.0008 | +0.617
49 | Karagioz et al | 1996 | 6.6729±0.0005 | -0.019
50 | Bagley & Luther | 1997 | 6.6740±0.0007 | -0.003
51 | Schurr, Nolting et al | 1997 | 6.6754±0.0014 | +0.018
52 | Luo et al | 1997 | 6.6699±0.0007 | -0.064
53 | Schwarz W. et al | 1998 | 6.6873±0.0094 | +0.196
54 | Kleinvoss et al | 1998 | 6.6735±0.0004 | -0.011
55 | Richman et al | 1998 | 6.683±0.011 | +0.132
56 | Luo et al | 1999 | 6.6699±0.0007 | -0.064
57 | Fitzgerald & Armstrong | 1999 | 6.6742±0.0007 | ±0.01
58 | Richman S.J. et al | 1999 | 6.6830±0.0011 | +0.132
59 | Schurr, Nolting et al | 1999 | 6.6754±0.0015 | +0.018
60 | Gundlach & Merkowitz | 1999 | 6.67422±0.00009 | +0.0003
61 | Quinn et al | 2000 | 6.67559±0.00027 | +0.021
-- | PRESENT CODATA VALUE | 2004 | 6.6742±0.001 | ±0.0150
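The last column of the table is simply each result's percentage deviation from the 2004 CODATA value, e.g. for three of the entries:

```python
codata = 6.6742   # CODATA 2004 value, x1e-11 m^3 kg^-1 s^-2

for author, year, g in (("Cavendish H.", 1798, 6.74),
                        ("Jolly Ph.", 1878, 6.46),
                        ("Gundlach & Merkowitz", 1999, 6.67422)):
    dev = (g - codata) / codata * 100.0   # percentage deviation
    print(f"{author} ({year}): {dev:+.3f} %")
```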


The official CODATA value for G in 1986 was given as G = (6.67259±0.00085)E-11 m3 kg-1 s-2 and was based on the Luther and Towler determination of 1982. However, the value of G has recently been called into question by new measurements from respected research teams in Germany, New Zealand, and Russia which set out to settle the issue. The new values, obtained using the best laboratory equipment to date, disagreed wildly, to the point that many now doubt the constancy of this parameter, and some even postulate entirely new forces to explain these gravitational anomalies. For example, in 1996, a team from the German Institute of Standards led by W. Michaelis obtained a value for G that is 0.6% higher than the accepted value; another group, from the University of Wuppertal in Germany led by Hinrich Meyer, found a value that is 0.06% lower; and in 1995, Mark Fitzgerald and collaborators at the Measurement Standards Laboratory of New Zealand measured a value that is 0.13% lower. The Russian group found a curious space and time variation of G of up to +0.7%. In the early 1980s, Frank Stacey and his colleagues measured G in deep mines and bore holes in Australia. Their value was about 1% higher than the currently accepted one. In 1986 Ephraim Fischbach, at the University of Washington, Seattle, claimed that laboratory tests also showed a slight deviation from Newton's law of gravity, consistent with the Australian results. As may be seen from the Cavendish conference data, the results of the major 7 groups agree with each other only at the 0.1% level. So, despite our great advancements in measuring equipment, we are still very close to the precision of 1% obtained by Cavendish in the 18th century. This controversy has spurred several efforts to make a more reliable measurement of G, but until now we have only obtained further conflicting results.


[Figure: g-plot — measured values of G by experiment]

More evidence

One such effort was that by J.P. Schwartz and J.E. Faller, who devised an experiment that uses the gravity field of a half-metric-ton source mass to perturb the trajectory of a free-falling mass. They used laser interferometry to track the falling object. This experiment does not suspend the test mass from a support system, and therefore rules out many of the systematic errors associated with supports in Cavendish-like setups. Below are the results gathered over three years.

[Figure: G results from the free-fall method]


This is a plot of G results using the mentioned free-fall technique. Error bars represent one formal standard deviation. The 1997 data was processed daily, giving values of G from 6.66E-11 to 6.71E-11. One day's observation consisted of approximately 7200 drop measurements. Again, the data consistently shows that G varies over time, with an uncertainty of over 1400 ppm, despite the fact that all sources of possible experimental error associated with the classical Cavendish setup have been eliminated.

Just a couple of years ago, Mikhail Gershteyn, a visiting scientist at the MIT Plasma Science and Fusion Centre, and his colleagues experimentally demonstrated that the well-known force of gravity between two test bodies varies with their orientation in space, relative to the system of distant stars. Their remarkable finding was also published in the journal 'Gravitation and Cosmology'. George Spagna, chairman of the physics department at Randolph-Macon College, argued that Gershteyn and his colleagues must provide theoretical justification to be convincing.

...and some more

Variations in G are not only present in Cavendish experiments and free-fall setups. They have been recorded by nature in several ways, which we can now interpret and use to find the constraints on the variation of the gravitational constant over long periods of time. Astrophysical constraints on this variation have been obtained using various observational methods, including lunar occultations and eclipses (Muller et al 1991), planetary and lunar radar-ranging measurements (Shapiro 1990), helioseismology (Guenther et al 1998), primordial nucleosynthesis (Olive et al 1990), gravitational lensing (Krauss & White 1992), and the white dwarf luminosity function (Garcia et al 1995). Determinations based on celestial mechanics provide constraints on the variation of G of (dG/dt)/G ≤ 10E-12/year. Other methods, such as those utilizing neutron star masses (Thorsett 1996), globular cluster ages (Degl'Innocenti et al 1995), binary pulsar timings (Damour & Gundlach 1991) and helioseismology (Demarque et al 1994), have yielded similar constraints on the long-term averaged variation of the gravitational constant. Another way to determine the long-term average change in G is by analysing the variation of planets' radii. The best limit comes from Mercury, which gives a limit on the variability of G of (dG/dt)/G ≤ 8E-12/year, derived from the fact that the radius of Mercury has changed by at most one km during the last 3000 to 4000 million years.

The collection of these new results suggests that something is wrong or missing in our understanding of G. By the end of 1999, the international committee CODATA decided to officially increase the uncertainty of the accepted value for the gravitational constant from 128 ppm to 1500 ppm. This remarkable step of increasing the uncertainty instead of decreasing it was taken to reflect the discrepancies between the mentioned experiments. In my theory of absolute velocity of matter, I will show that the variation within all these experimental results not only IS NOT due to experimental error, but that the measure of the variation itself is of paramount importance to our understanding of physical laws, and indeed of the whole universe. Several physicists, among them Arthur Eddington and Paul Dirac, have speculated that at least some of the 'fundamental constants' may change with time. In particular, Dirac proposed that the universal gravitational constant G is related to the age of the universe T through the relation Gmp²/ħc ~ 1/T. Then, as the age varies, some constants or their combinations must vary as well. Atomic constants seemed to Dirac to be more stable, so he chose the variation of G as 1/T; in other words, the gravitational force weakens as the universe expands. In one of his lectures, Richard Feynman said: "...the gravitation attraction relative to the electrical repulsion of two electrons is 0.24E-42... the ratio of time taken for light to travel across a proton to the age of the universe is 0.63E-42... this relation is not accidental (also known as Dirac's large number hypothesis), in which case the gravitational constant would be changing with time, because as the universe got older, the ratio of the age of the universe to the time which it takes for light to go across a proton would be gradually increasing." A few modern generalised theories of gravitation also admit or predict the variation of G with time.
Revival of interest in the Brans-Dicke and similar theories, with variable G, was in fact motivated by the appearance of superstring theories, where G is considered a dynamic quantity. The acceptance of a non-zero variation in G would of course require the revision or extension of general relativity, since general relativity assumes a constant G, which experimental evidence seems to consistently deny. The acceptance of a variable G will, of course, set the dawn of a new physics.

Solving the controversy by better understanding of the nature of mass

The main reason leading to the above-mentioned controversies is that mass m is considered a static substance rather than a dynamic property of space, and despite our knowledge of relativistic mass M, research labs keep applying the rest-mass definition to the masses involved in their experiments. Also, unless you consider the spin levels which I'll soon introduce, there is no way to understand why the variations of the measured gravitational constant over a small period of time do not add up to a greater variation over a larger period. One can easily note, for example, that although measurements within one year vary by about 0.7% of the mean value, the variation between the average values of subsequent years is much smaller. This is because averaging the values cancels out the dynamic variations in mass. It is no coincidence that ALL Cavendish and free-fall experimental results fit within the same upper and lower boundary limits of 6.645E-11 to 6.715E-11, equivalent to -0.43% and +0.611% relative to the present (average) CODATA value. What scientists seem to be missing is that G is oscillating within these two boundaries, and any experiment will give the value of G at that particular point in the oscillation. It means that all the experimental data points shown above are in fact tracing an oscillating curve. Further on, we will analyse the origin of these oscillations. Those of you who followed my ST conversion table could clearly see that mass is just a 3D structure of energy, and that energy and motion have opposite (inverse) dimensions. Matter can be considered as different levels of spinning electromagnetic gyros embedded within each other. No wonder that inertial mass is defined as resistance to motion! Varying the velocity of matter will change the velocity of all these gyros with respect to a fixed point, and automatically vary its mass M.
Mathematically, it can be shown that relativistic mass has a velocity dependence, and an object will be more massive at higher velocities. The conventional invariant rest mass m0, on the other hand, can be shown to be equivalent to a relativistic mass when one changes the frame of reference to the next higher level. The confusion arising from the two different terms is due to the fact that rest mass is considered as SOMETHING residing IN space time, which does not exist. Mass is formed of space time itself (= T³/S³, remember?). Statements like "The invariant mass of a particle is independent of velocity" make no sense. Every 'invariant mass' can be thought of as a relativistic mass (a gyro) spinning inside a stationary box.


Why so much discrepancy in the measurement of G

Once you understand how all scientific parameters are interlinked, and how uncertainties propagate through standard deviation calculations, it becomes clear that the constant G should have the highest deviation of all, and that small changes in either of the two truly fundamental units, space and time, will be amplified in any experimental measurement of G.
The relation between most scientific parameters has already been discussed on our ST system of units page. From there you can see that G is defined as S⁶/T⁵. Now we need to know how errors add up when multiplying or dividing parameters whose values are known to be in error by some non-zero amount. Such an error, or uncertainty, in a parameter X is usually quoted as the relative standard uncertainty dX/X. But what happens if our quantity is related to X by Y = Xⁿ? The error increases according to the following equations:

In general, the total error dX/X of any computation X = A·B·C/(D·E·F)·... is:

dX/X = √{(dA/A)² + (dB/B)² + (dC/C)² + (dD/D)² + (dE/E)² + (dF/F)² + ....}

So, for our ST converted unit in the form G = Sˣ·Tʸ, the total standard deviation is:

dG/G = √{|x|·(dS/S)² + |y|·(dT/T)²}

So it follows that dG/G = √{6·(dS/S)² + 5·(dT/T)²}
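As a minimal numeric sketch of how this amplification works, the rule above can be evaluated directly. The dS/S and dT/T values below are purely illustrative placeholders, not measured figures:

```python
import math

def product_uncertainty(*rel_errors):
    """Relative uncertainty of a product/quotient of independent
    factors: individual relative errors add in quadrature."""
    return math.sqrt(sum(e ** 2 for e in rel_errors))

# For G = S^6 / T^5, the text treats the six S factors and five T
# factors as independent, giving dG/G = sqrt(6(dS/S)^2 + 5(dT/T)^2).
dS_S = 1e-4  # hypothetical relative uncertainty in S
dT_T = 1e-4  # hypothetical relative uncertainty in T
dG_G = product_uncertainty(*([dS_S] * 6 + [dT_T] * 5))
print(dG_G)  # sqrt(11) * 1e-4 ≈ 3.32e-4, amplified relative to dS/S
```

With equal input uncertainties, the result is √11 ≈ 3.3 times larger than either input, which is the amplification the argument relies on.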

Whatever the actual standard deviation values for S and T are, it is quite obvious that the 'constant' G should have the highest standard deviation of all units, due to its high powers of dS/S and dT/T. It also means that we can monitor measurements of G to get a picture of the much smaller variations of the fundamental units S and T. It is quite likely that the variations detected in 1887 by Michelson-Morley and in 1925 by Dayton Miller when checking for variations in the speed of light (the purpose of which was to reveal the movement of the Earth relative to an aether) are the same variations, on a smaller scale, that present scientists are noticing in the experimental values of G. Just as a side note, it is less widely known that both Michelson's and Miller's results were not null. However, Einstein's theories of SR and GR are only valid for an allegedly null result of these experiments, a result which both Michelson and Miller denied. Most of Dayton Miller's 200,000 individual readings clearly show a systematic positive result, yet Einstein preferred to dismiss this data as due to temperature variations, for obvious personal reasons. Shown below are just two of Miller's data sets, with sinusoidal fitting curves by Maurice Allais.



[Figure: two of Miller's data sets, with sinusoidal fitting curves by Maurice Allais]



What's the noise contribution in the measurement of G ?

As you can observe from the above curve fitting, and for reasons I will shortly explain, G varies in a quasi-sinusoidal manner, but we also have a secondary problem with the measurement of G: the noise which gets superimposed on the natural oscillating variations of G. This rather random noise is caused by various stellar and interstellar processes. These deviations correlate with different events taking place in our universe, such as those we observe from our own sun, including huge violent coronal mass ejections and electric and magnetic flux changes, which are randomly generated across the universe. It is known that during coronal mass ejections (CMEs) from our own sun, billions of tons of searing plasma can be accelerated to millions of miles per hour. Solar flares are another mechanism which suddenly releases huge amounts of magnetic energy and radiation across virtually the entire electromagnetic spectrum. Such activities can easily result in noise levels exceeding the noise floor of the measuring equipment, or even the amplitude of the observable sinusoidal variation itself. Luckily, they are random, whilst the wanted variation is repetitive, so using digital filtering/curve fitting techniques we can still filter out this noise and analyse our interesting variations in G... as long as we do not discard all variations in our readings as experimental error!

Newton's wrong assumption

For over 200 years the equations of motion as stated by Newton were taken as final, and seemed to describe nature quite accurately. However, discrepancies were later found which required the intervention of Einstein in 1905 to tweak Newton's laws of motion. The tweak came at the expense of accepting that MASS VARIES WITH VELOCITY, and this is clearly shown in Lorentz's work:


M = γ m0 .... where γ is the Lorentz factor = 1/√(1 - v²/c²)

M = effective (relativistic) mass
γ = Lorentz factor
m0 = rest mass (stationary mass)
v = velocity of the mass relative to the observer's reference frame
c = speed of light
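The Lorentz factor and the resulting effective mass can be sketched numerically as follows (velocities in km/s; the half-lightspeed example is illustrative, not from the text):

```python
import math

C = 299_792.458  # speed of light in km/s

def lorentz_gamma(v_kps: float) -> float:
    """Lorentz factor gamma = 1/sqrt(1 - v^2/c^2) for v in km/s."""
    return 1.0 / math.sqrt(1.0 - (v_kps / C) ** 2)

def effective_mass(m0_kg: float, v_kps: float) -> float:
    """Relativistic (effective) mass M = gamma * m0."""
    return lorentz_gamma(v_kps) * m0_kg

print(effective_mass(1.0, C / 2))  # ≈ 1.1547 kg at half the speed of light
```

At everyday speeds γ is indistinguishable from 1, which is why the later sections need so many decimal places to show the effect.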


From his observations, Kepler was able to state his third law, which in our context is the most important of all. The third law relates the period of a planet's orbit, T, to the length of its semimajor axis, a. It states that the square of the sidereal period of the orbit (T²) is proportional to the cube of the semimajor axis (a³), and further that the constant of proportionality is independent of the individual planets; in other words, each and every planet has the same constant of proportionality K:


K = a³/T² .... K = 3.35E18 m³/s²

[Figure: Kepler's constant]

Since Newton wrongly assumed that mass is a constant, he later 'hid' Kepler's constant within his gravitational constant G. Newton's G and Kepler's constant K are related through:



K = G·MS/4π² ..... where MS = Solar system mass

So here we can see that planetary motion shows that it is not G or M that is constant, but their product GM, and that G will only be constant as long as mass is constant. In fact, today we call the GM product the standard gravitational parameter µ. It not only simplifies various gravity-related formulas, but also gives more accurate results than using separate values of G and M. Indeed, the product of G and the mass of the Sun is known much more accurately than either quantity alone! So, at relativistic speeds, we have to account for the relativistic mass, and we have:

From Kepler's constant and the standard gravitational parameter, we know that GM is conserved for different planetary velocities:

µ = GM = (G + ΔG) × (M + ΔM)

(G + ΔG) ∝ 1/(M + ΔM)

ΔM = Δγ·m0 and ΔG = Δ(1/γ)·G ..... where γ = Lorentz factor, m0 = rest mass

This relationship shows that any change in mass will be reflected in a change in G,
and that both mass and G are functions of velocity.

This conclusion can also be easily deduced from the fact that the gravitational 'constant' G has MLT dimensions L³M⁻¹T⁻², showing the inverse relation between G and mass.
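The relation K = GM/4π² can be checked numerically. A sketch follows; the GM value used is the standard gravitational parameter of the Sun (an assumption on my part, since the Sun dominates the solar system's mass), and the Earth orbit figures are standard textbook values:

```python
import math

MU_SUN = 1.32712440018e20   # GM of the Sun, m^3 s^-2 (standard gravitational parameter)
K = MU_SUN / (4 * math.pi ** 2)
print(K)                    # ≈ 3.36e18 m^3/s^2, close to the quoted 3.35e18

# Cross-check against Kepler's form K = a^3 / T^2 using Earth's orbit:
a = 1.495978707e11          # Earth's semimajor axis, m
T = 365.25636 * 86400       # sidereal year, s
print(a ** 3 / T ** 2)      # ≈ 3.36e18 m^3/s^2
```

The two routes agree to within a fraction of a percent, which illustrates the text's point that the product GM is known far better than G or M separately.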


For a body oscillating between two relativistic velocities VMAX and VMIN, both expressed as fractions of c, the variation of the Lorentz factor is given by:

γMAX/γMIN = {1/√(1 - VMAX²)} / {1/√(1 - VMIN²)}

γMAX/γMIN = MMAX/MMIN = GMIN/GMAX ... where GMAX - GMIN represents the deviation in G.

In other words, the experimental variation in G is a MEASURE of the variation of the absolute velocity of the test masses at that point in space and time, and not an error at all! This should come as no surprise; after all, we know that Einstein already tweaked Newton's laws of motion by replacing the rest mass with the relativistic mass. At first it was shocking to accept that Newton's laws were wrong, after 200 years of general acceptance. The fact is that the relativistic effects on the experiments which had validated Newton's laws were smaller than the experimental error, and so the small but important change in mass was neglected. Einstein's tweak to Newton's equations of motion was simply to replace Newton's constant mass by the velocity-dependent mass.

[Figure: effective mass vs velocity]


For velocities much lower than the speed of light, Einstein's tweak has no effect upon Newton's original equations, but this is not a good enough reason for present physics textbooks to still quote Newton's laws of motion without any hint that mass is in fact a function of velocity. In fact, it is not until advanced levels that the student is first exposed to Einstein's relativistic mass.

We are used to the fact that a 1 kg steel ball will always 'contain' 1 kg of matter, and that when we put it on a measuring balance on earth, we will always read 9.8 newtons of weight. Yet Einstein showed this is false. As shown in both my ST conversion table and in Einstein's equations, the effective mass of an object (its opposition to motion) increases with the velocity of the object, and the effective mass is not something virtual. As long as a 1 kg steel ball is moving at a velocity high enough to reach an effective mass of 10 kg, the steel ball will have all the properties of a 10 kg steel ball, no more, no less, and the 1 kg mass becomes history! The first confirmation of a measured increase in mass came in 1908, from measurements of the mass of fast electrons in a vacuum tube. In fact, TV designers work out their calculations assuming an electron mass 0.5% heavier than its so-called 'rest mass' when calculating the magnetic fields used to deflect the electrons across the screen.
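As a rough check of that 0.5% figure, one can invert the Lorentz factor to find the electron speed it implies. This is only a sketch; the 0.5% value is the one quoted above, and everything else follows from the γ formula:

```python
import math

C = 299_792.458  # speed of light, km/s

# Invert gamma = 1/sqrt(1 - v^2/c^2) for gamma = 1.005
# (effective mass 0.5% above rest mass):
gamma = 1.005
v = C * math.sqrt(1.0 - 1.0 / gamma ** 2)
print(v, v / C)  # ≈ 29,867 km/s, i.e. about 10% of the speed of light
```

So a 0.5% mass increase already corresponds to electrons moving at roughly a tenth of the speed of light, which is plausible for CRT deflection calculations.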

[Figure: non-linear response of the Lorentz factor]


At relativistic speeds, the effective mass will increase with velocity. As you can see from the graph, the Lorentz factor is not linear: its gradient increases further as the velocity approaches the speed of light. This means that if a mass is moving at relativistic velocity, and a similar mass is moving at twice its velocity, the mass of the faster object will be more than twice the mass of the slower one. Also, if a mass moving at relativistic velocity varies its velocity sinusoidally, the variation in mass will not be perfectly sinusoidal, but distorted towards the positive velocity variation. The above plot shows multiple curves (in blue) plotting 1/γ for an object travelling at velocities v ± 30 km/sec, where 0 < v < c, that is, moving forward at relativistic velocity with non-relativistic sinusoidal velocity variations. Due to the non-linear curve for γ, a perfect sinusoidal velocity variation will eventually result in a distorted sine wave representing the actual Lorentz factor variation. Since the plot is of 1/γ, it also represents the variation of G for such an object.
In the following paragraph, we will see that on earth (where most experiments are done), no object is really at rest, and that the relativistic mass has to be considered even for a steel ball sitting motionless on a table. The only thing which is in fact at rest in the whole (closed) universe is its boundary, the reference frame beyond which no matter can exist.

How fast is Earth going

For us who live on this planet, it looks as if our planet is stationary. In fact, a long time ago, it was believed that the sun and stars all revolved around the fixed earth, and that the earth was at the centre of the universe. We now know that our Earth is just a tiny planet residing in a huge universe containing multitudes of galaxies, each with thousands of solar systems.

We know that our planet spins on its axis at one cycle per day. The solar system, in turn, spins at one cycle per year. We normally refer to solar system spin as planetary orbital motion, but in fact even the sun is known to be spinning, so it is more correct to call it solar system spin. Our whole solar system is thus spinning on its own axis while orbiting around our Sagittarius Dwarf galaxy (not the Milky Way galaxy) at one cycle approximately every 226 million years, and it is highly probable that other galaxies spin around as well; this hierarchy goes on for five levels. All this happens within a closed, fixed-frame universe. So, saying that something is at rest means only that it is travelling at the same velocity as the observer, not that it is at rest relative to the universe's frame of reference. Your PC, your desk, your room are all travelling through space at the same speed as you are, and the velocity at which you are travelling right now is far greater than you would ever expect. The list below shows the currently accepted velocities for the known universe.



   •How fast is the Earth spinning? 0.46 km/sec
   •How fast is the Solar system spinning? 30 km/sec
   •How fast is the Galaxy spinning? 250 km/sec
   •How fast is our super cluster spinning? 627 km/sec
   •How fast is the CMBR frame spinning? Assumed at rest

So, when all these velocities happen to line up, we will have an absolute velocity of 907.46 km/sec, or 0.3% of the speed of light, when 'stationary'!
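The aligned worst-case figure is a simple addition of the velocities listed above; a one-line sketch:

```python
C = 299_792.458  # speed of light, km/s

# Quoted spin velocities, km/s: Earth spin, solar system, galaxy, supercluster
v_abs = 0.46 + 30 + 250 + 627
print(v_abs)            # 907.46 km/s
print(100 * v_abs / C)  # ≈ 0.30% of the speed of light
```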



Introducing Macro Spin Levels and the Relativistic Universe model


[Figure: fractal model of the universe]


To understand mass, we need a fixed reference frame, or at least a frame of reference one hierarchy level higher than the one in which the mass seems at rest. Normally, we are used to taking earth as our frame of reference, but we know that other bodies exist outside such a frame. According to Mach's Principle, "The inertia of any system is the result of the interaction of that system and the rest of the universe. In other words, every particle in the universe ultimately has an effect on every other particle." Everything is connected in such a way, in both space and time, that if one thing changes, everything changes. Thus, when considering any reference frame other than the one containing the whole universe, you are automatically ignoring an active part of your physical system, which results in an incomplete system. Everything is connected, or as we say, gravity bound, so in order to work out the motion of a particle anywhere in the universe, one must take into account the whole universe. The whole universe is a single body of oscillators in steady state. No one can claim that his results are accurate enough if they are derived within an arbitrary reference frame of his choice. So, we have to adopt the universe's highest hierarchy frame of reference and start analysing the motion of the object in question relative to that frame. The velocity relative to such a frame is called the absolute velocity, and this is the only frame of reference in which absolute and relative velocities become equal. From this frame, the motion of earth, together with all objects on its surface, consists of multiple spin-within-spin mechanisms, or convoluted spins. That is, the absolute resultant velocity at any point on earth is made up of the tangential velocity of the earth's spin, vectorially added to the tangential velocity of the planet's orbit around the sun, added to the tangential velocity of the solar system, and so on.
We found the fractal diagram pictured above very helpful in understanding what the universe might look like and how it behaves. We will call these spins 'Macro Spin Levels', with Macro Spin Level 1 being the earth's spin about its own axis, at a velocity of about 0.46 km/sec (kps), completing a full cycle every day. The next level, Macro Spin Level 2, is the solar system's spin about its axis, which we observe as the planets' orbits around the (rotating) sun. For Earth, this velocity is about 30 kps, completing a full cycle every year. Macro Spin Level 3 is the galaxy's spin about its central black hole Sagittarius A*, whose existence we notice as our solar system travels in a far bigger orbit around our galaxy. This velocity is about 250 kps, completing a full cycle about every 226 million years. We know of the existence of other spinning galaxies which, grouped together as clusters in a fashion similar to the spinning planets orbiting the sun, orbit yet another centre of rotation: the centre of the Virgo supercluster. This is Macro Spin Level 4. The Virgo supercluster's centre is known as the Great Attractor, and could possibly have its own orbital velocity as well, orbiting yet another centre together with several other 'attractors'. This is Macro Spin Level 5, and its centre is fixed. The highest level we have data for is level 4: the velocity of the centre of our galaxy with respect to the Great Attractor, measured as 627 kps. This is the velocity as measured by doppler effect with respect to the cosmic background radiation. In the following calculations I will conventionally assume the Great Attractor to be stationary, and that Spin Level 5 does not exist. We will refer to it further on in this page.
One important thing to notice is that each hierarchy level is cyclic, so the increase in velocity in its first half cycle will be balanced by an equal decrease in velocity by the completion of the cycle. This concept shows that the whole universe is cyclic, and that the present expansion of the universe is part of a huge time cycle which expands and contracts the whole universe. This eliminates the need for the Big Bang model as a means of understanding the present expansion state.

Macro Spin Level 3

[Figure: galaxy orbit]
At the time of writing, it is generally thought that all galaxy clusters rotate about what is normally referred to as the Great Attractor. This Great Attractor is assumed by most to be so fixed in space that it can be taken as the fixed reference in the universe. As you can see in my universe hierarchy diagram, and as is highly debated among astronomers and scientists, we lack far too much data and knowledge to assume such a thing, and the Great Attractor is probably orbiting, together with other great attractors, around the real fixed centre of the universe. For the preliminary calculations we shall abide by the conventional idea that the Great Attractor is fixed, and start from Spin Level 4, which is the orbital velocity of the galaxy about the Great Attractor.
From astronomy we know some interesting data about Macro Spin Level 3. At this level our whole galaxy is spinning about its own centre at a velocity of 250 kps while orbiting through the universe at the Spin Level 4 velocity of about 627 kps relative to the Great Attractor. The value of 627 kps is equal to 0.21% of the speed of light and has been measured from the cosmic microwave background radiation, under the assumption that the speed of light is constant across the whole universe.
Let's work out the change in mass that would result from a change of +250 kps to -250 kps about the galaxy velocity of 627 kps, assuming the Great Attractor at rest.

From γ = 1/√(1 - v²/c²):

At v = 627+250 kps, or 877 kps, we get γMAX = 1.000004279
At v = 627-250 kps, or 377 kps, we get γMIN = 1.000000791
So the variation Δγ from its minimum to maximum value is 3.488E-6
Applying this γ factor to a mass of MMIN = 1000 kg at any place in our solar system, ΔM (in grammes) = Δγ × 1000 × 1000 = 3.488 grammes

So, a nominal mass of 1000 kg would vary its mass cyclically by 3.488 grammes every 226 million years, the time taken for the solar system to make a complete revolution around the galaxy. This percentage change in mass takes effect over the whole galaxy, and even though the percentage may seem small, the change in global mass will be quite huge considering the total mass of the whole galaxy. It will thus oscillate the gravitational force between all stars within the galaxy, and also between their components at lower macro spin levels. Spin Level 3 variation is thus the only variation that will show up when averaging data values over one year. Our value will thus show as a long-term variation of (dG/dt)/G = 3.488E-6/112E6 = 0.031E-12/year.
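The Level 3 arithmetic above can be reproduced directly; a sketch under the text's own assumption that the Great Attractor is at rest:

```python
import math

C = 299_792.458  # speed of light, km/s

def gamma(v_kps):
    """Lorentz factor for a speed in km/s."""
    return 1.0 / math.sqrt(1.0 - (v_kps / C) ** 2)

# Galaxy spin of +/-250 km/s on top of the 627 km/s level-4 velocity:
d_gamma = gamma(627 + 250) - gamma(627 - 250)
print(d_gamma)                # ≈ 3.488e-6
print(d_gamma * 1000 * 1000)  # ΔM for a 1000 kg mass, in grammes ≈ 3.49 g
```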


Macro Spin Level 2

Now let's consider Level 2, the orbital spin of the earth and other planets around the sun. Actually, it's not just the planets that are spinning around the sun: the entire solar system, including the sun, is rotating about its own axis. In astronomy, the 'ecliptic plane' is by definition the 2D plane in space defined by the sun at its centre and by the orbit of the earth, as shown below. The 12 zodiac constellations all lie on the ecliptic plane. Let us assume that an observer outside our solar system is observing the motion from a fixed point in space on the same plane as the ecliptic. If he could measure the velocities of the sun and earth, he would note that the sun is moving at a constant 250 kps around the centre of the galaxy, but he would also note that the earth is not moving at a constant velocity. At times the earth is moving at 220 kps, at times at 280 kps (because it is going in circles around the sun), and at most times it is accelerating or decelerating between these two limits. The earth will seem to be racing with the sun around the galaxy: at times slightly in front of the sun, at other times slightly behind it, moving faster than the sun as it comes in front, and slower as it falls behind. This motion describes the absolute velocity of Macro Spin Level 2, with the Great Attractor taken as the fixed reference.

[Figure: Macro Spin Level 2]

Our solar system's velocity around the centre of our galaxy is known to be approximately 250 kps, and earth's orbital velocity is about 30 kps. This means that the earth's velocity with respect to the galaxy varies from 220 kps to 280 kps, depending on the earth's orbital position relative to our joint path around the centre of the galaxy, that is, depending on the month of the year. To find the maximum absolute velocity limits for Spin Level 2, we must also consider the velocity of the next higher level, at which the centre of the galaxy is moving with respect to the fixed reference frame, and so we must add 627 kps to the solar system velocity.


Calculation of mass variation for Macro Spin Level 2

This velocity will be a sinusoidal variation oscillating from +30 kps to -30 kps about the absolute solar system velocity of 250 kps + 627 kps = 877 kps, that is, an oscillation with a peak-to-peak velocity variation of 60 kps.

This means that any object on earth, including the earth itself, is moving at a velocity of 877 kps which varies by ±30 kps, a total velocity variation of 60 kps every yearly cycle.

So, let us work out the change in mass that would result from a change of 60kps at the absolute solar system velocity of 877kps.

From γ = 1/√(1 - v²/c²):

At 877+30 kps, or 907 kps, we get γMAX = 1.000004577
At 877-30 kps, or 847 kps, we get γMIN = 1.000003992
So the variation Δγ from its minimum to maximum value is 5.85E-7
Applying this γ factor to a mass of 1000 kg at any place on earth, ΔM = Δγ × 1000 kg = 0.585 grammes

So, a nominal mass of 1000 kg would vary its mass cyclically by 0.585 grammes every 6 months, returning to its original mass over the following 6 months.

Spin Level 2 variation does not depend on the location of the object on earth, since the velocity variation takes place over all matter on earth, and thus can be applied to the whole earth's mass.
Thus, applying this γ factor to Earth's average mass, ΔM = Δγ·Me = 5.85E-7 × 5.972E24 kg.
Earth's mass will increase by 3.49E18 kg over a period of 6 months and lose 3.49E18 kg over the following 6 months. All consequences of Macro Spin Level 2 will thus show and repeat themselves yearly. One obvious consequence of this change in earth's mass is that the gravitational force of attraction between it and the sun will have a minimum and a maximum value separated by 6 months. The minimum distance between the earth and sun will occur at the position in which the earth is moving at 280 kps, due to the increase in earth's effective mass, whilst the maximum radial distance from the sun will occur when the earth is moving at 220 kps (refer to the above diagram). This implies that the orbit around the sun cannot have a uniform centripetal force, and the orbit will be distorted into an elliptical one. And this we know for a fact. Or was it another enigma?

Already at Macro Spin Level 2, with a velocity variation of 60 kps, the motion of the earth around the sun DOES make a dangerously worrying difference to the masses involved. According to the above calculation, it will in fact vary the earth's mass in a quasi-sinusoidal way by 3.49E18 kg (earth's mass = 5.972E24 kg). And this assumes that the cosmic radiation background is the reference frame; otherwise all the calculated values can be much higher. Also note that if one averages Spin Level 2 variations over a year, they virtually cancel out, since the increase in G in the first 6 months is cancelled by the decrease in the following months. This is why the experimental variations between experiments done during the year do not add up to a total variation over the whole year.
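The same arithmetic for Level 2, including the earth-mass swing quoted above, can be sketched as follows (877 km/s is the absolute base velocity the text assumes):

```python
import math

C = 299_792.458  # speed of light, km/s

def gamma(v_kps):
    """Lorentz factor for a speed in km/s."""
    return 1.0 / math.sqrt(1.0 - (v_kps / C) ** 2)

# +/-30 km/s orbital oscillation about the 877 km/s absolute velocity:
d_gamma = gamma(877 + 30) - gamma(877 - 30)
print(d_gamma)            # ≈ 5.85e-7

M_EARTH = 5.972e24        # kg
print(d_gamma * M_EARTH)  # ≈ 3.49e18 kg swing between the half-years
```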



Macro Spin Level 1

We have plenty of data at this level. The tangential velocity of spin level 1 can be easily calculated from the radius of the earth and the time it takes for one complete spin (one sidereal day).

Calculation of mass variation for Macro Spin Level 1

Earth's equatorial diameter: 12757km
Time for complete spin about its axis: 23hrs 56mins 4sec = 86164 seconds
Equatorial perimeter = Pi * 12757 = 40077.29km
Tangential velocity = 40077.29/86164 = 0.465 km/sec or kps
Earth's orbital velocity around the sun (spin level 2) is known to be 30kps.
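The tangential velocity figure above can be verified directly from the quoted diameter and sidereal day; a minimal sketch in Python:

```python
import math

# Earth's equatorial spin speed from the figures quoted above
diameter_km = 12_757
sidereal_day_s = 23 * 3600 + 56 * 60 + 4   # = 86_164 s

perimeter_km = math.pi * diameter_km        # ~40_077 km
v_spin = perimeter_km / sidereal_day_s      # km/s
print(f"tangential velocity = {v_spin * 1000:.1f} m/s")  # ~465 m/s
```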

So, considering spin about its axis on its own, any 'stationary' object in the equatorial region of the earth is moving at 0.465 kps or 465 m/s: already supersonic! An observer at the position of the sun, looking at the earth, would see the earth's motion as in the animated diagram above. Any point on the equator would seem to oscillate from left to right and back, its velocity varying from zero at the left, to +0.465kps at the centre (at noon), to zero at the right, then to -0.465kps on the other side (at midnight), and back to zero after a complete cycle. He would also see the earth moving to the left at its nominal 30kps. The velocity of the earth as seen from the sun would therefore be 30kps ±0.465kps, so the change in velocity modulated onto the 30kps orbital speed is 0.93kps within every half cycle.

From γ = 1/√(1 - v²/c²)

At 627+250+30+0.465kps, we get γMAX = 1.000004582
At 627+250+30-0.465kps, we get γMIN = 1.000004572

So the variation Δγ from its minimum to maximum value is 1E-8.
For Macro Spin Level 1, this change in effective mass depends on the location of the object on earth; objects at the poles will not be affected by it.
Applying this γ factor to a mass of 1000Kg at the equator, ΔM (in grammes) = Δγ*1000*1000 = 10 milligrammes

So, a nominal mass of 1000kg at the equator would vary its mass cyclically by 10mg every 12 hours. I have also shown that the mass variation for Macro Spin Level 1 depends on the actual location on earth, and that it has almost no effect at the earth's poles. The consequences of changes occurring within this spin level will repeat themselves every 24 hours.
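The Δγ and the milligramme figure can be checked numerically. This is a sketch using the same 627+250+30 kps base velocity as above; note that the unrounded Δγ is about 9.4E-9, which the text rounds to 1E-8, so the swing comes out closer to 9 mg than 10 mg:

```python
import math

C = 299_792.458  # speed of light in km/s

def gamma(v_kps: float) -> float:
    """Lorentz factor for a speed given in km/s."""
    return 1.0 / math.sqrt(1.0 - (v_kps / C) ** 2)

base = 627 + 250 + 30   # the text's combined base velocity, kps
spin = 0.465            # equatorial spin speed, kps

d_gamma = gamma(base + spin) - gamma(base - spin)
print(f"delta-gamma = {d_gamma:.1e}")   # ~9.4e-9, rounded to 1E-8 in the text

dm_mg = d_gamma * 1000 * 1_000_000      # swing of a 1000 kg mass, in mg
print(f"1000 kg equatorial mass swing = {dm_mg:.1f} mg")  # ~9-10 mg
```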


Natural consequences of change in mass

Given the knowledge of such a mechanism, even while lacking a comprehensive set of accurate data to complete exact mass variation calculations at all spin levels, and knowledge of a possible 4th spin level, we can still evaluate the consequences of the spin levels treated here. For Spin level 3, the mass variation will be due to the rotation of the galaxy about its own axis. This mass variation will oscillate periodically about every 226 million years, the time the solar system takes to make one complete cycle around our galaxy. Such a mass variation would be even greater than that of level 2, and would literally oscillate the mass values of all bodies within the solar system, including the sun. This means that, as a consequence of the 3rd Macro Spin Level, all the planets' orbits will decrease their radial distances from the sun and get very close to each other for the first 112 million years, and then do the opposite over the next 112 million years. We currently have evidence that the earth and all the planets are in fact getting closer to the sun, and now we know why.

Another consequence of such a big variation in the mass of all objects within the solar system is that, while the planets themselves increase in mass, gravity can possibly crush them into higher density planets. Bigger animals will have less chance to survive as their bodies collapse under their own weight, and animals start getting smaller. In the case where the value of G changes abruptly, only the small 'versions' survive. Scientists are now convinced that what we refer to as birds are in fact the survivors of the small-scale dinosaurs. This can also explain many unsolved facts in the known history of evolution. When, over the next 112 million year half-cycle, mass starts to diminish again, Earth's density will decrease; possibly Earth itself would expand in radius, explaining why the continents' coastlines fit each other almost perfectly and could once have covered the whole surface of a smaller earth. Animals grow taller and bigger as their muscles become able to lift bigger bodies, and for us humans, building temples with huge rocks, without any impossible machinery, would be like playing with blocks! Does this solve another mystery?


dinosaurs
Dinosaurs would be crushed by their own weight under our present gravitational force

Diplodocus weighed about 35 tons, but the limited strength of biological tissue makes it theoretically impossible for such a creature to support its own weight. The flying reptile Pteranodon had a wing span of 8 metres, which is theoretically inadequate to lift off such a large animal. A substantially lower value for the G constant would easily explain these anomalies.


Kalasasaya temple
Kalasasaya Temple at Tiahuanaco - its size and weight clearly indicate that gravitational forces were much lower during its time. The vertical columns are 12 feet tall.
Human beings could handle heavier and bigger building blocks with no problem



Ollantaytambo
Wall of gigantic blocks on the summit of Ollantaytambo. The huge polished, jewel-hard pink porphyry blocks are in the range of 200 tonnes each, and were brought to an altitude of 60 metres from a quarry located 8km away and 900m higher on the opposite side of the mountain! There is no way our present human body, with all our technology, could ever achieve such a task at the present high value of G.


human giant
Giant human bones were discovered in a mountain valley area of Turkey. One leg bone among the fossils measures 120cm in length, indicating a body height of about 5 metres.


Nazca plain
Photo of the monkey on the Nazca plain (18m in diameter). Giant humans did not need to work out complex and precise geometry to draft huge scale pictures on the ground. It makes no sense to go through all that geometric trouble just to draw a monkey. All they did was draft their picture on the ground like a child drawing a picture on paper. Today, the photo is taken from a plane, but in those days, a human just had to stand upright!


Small human
Photo of a miniature human skeleton (left), Homo floresiensis, compared to a present human skeleton (right), Homo sapiens. This adult skull was about the size of a 3 year old modern human child's. Homo floresiensis lived when the value of G was at its peak. It is estimated that this skull belongs to a 30 year old female, about 1m tall, weighing just about 25kg, who lived about 18000 years ago. In the same sediment deposits, bones of dwarf elephants were also found. Since we have past records showing that G was sometimes larger and sometimes smaller than our present value, it is evident that the value of G is not following a linear trend, but oscillating.


human foot
Giant human footprint contemporary with dinosaur footprints, taken from the Paluxy River, Texas. It exceeds 45cm in length. Studies have revealed that it was of a 10 foot tall female weighing about 454kg.


What if the Great Attractor is not fixed in space?

In the above calculations, I showed that various changes in mass occur at different locations in space at different times. Even if all the numbers are wrong, you always end up with non-zero variations in mass, of different periodic cycles superimposed on each other. One could in fact use the few past records we have of the maximum 'error' in experimental G to determine the actual velocity deviations of our solar system with respect to the universe reference frame in the past. Since the history of records for values of G spans no more than a few decades, the variations due to Spin level 3, which take 226 million years for a whole cycle, will be negligible. However, Spin level 2 shows its whole deviation within a year's time, and is the main spin level generating the error in measurement. We have seen that the macro spin level 2 velocity variation results in Δγ = 5.85E-7, a deviation of about 0.00006% in mass, which will be reflected in the experimentally derived value for G. The error of 0.0128% quoted by CODATA is still about 213 times this deviation, and this could simply mean that the Great Attractor, assumed fixed in space, is in motion as well, in which case its velocity has to be added to all spin levels. If we account for a higher spin level, i.e. spin level 4, the absolute velocity has to be stepped up by the velocity of this spin level, and all mass variations in the lower spin levels will thus be offset to higher values.

If one assumes that the average of 0.7% experimental error in G is all due to the fact that the experiments have been done in different months of the year, we can equate the mass variation of spin level 2 to 0.7% to find the approximate value for the Great Attractor orbital velocity (GAV) around the centre of the universe.

γMAX/γMIN = {1/√(1-v²max/c²)} / {1/√(1-v²min/c²)}

100.7% = {1/√(1-v²max/c²)} / {1/√(1-v²min/c²)}

Vmax = 907 + GAV, Vmin = 847 + GAV

1.007 = {1/√(1-(907+GAV)²/c²)} / {1/√(1-(847+GAV)²/c²)} ..... c = 299792.458kps

GAV ~ 294644km/sec or 0.9828c
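The GAV value can be recovered numerically by bisecting the γ-ratio equation above for the velocity at which the half-yearly ratio reaches 1.007. A sketch, using the same 907 and 847 kps extremes as the text:

```python
import math

C = 299_792.458  # speed of light in km/s

def gamma(v: float) -> float:
    """Lorentz factor for a speed given in km/s."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def ratio(gav: float) -> float:
    """gamma_max/gamma_min for the spin level 2 extremes riding on GAV."""
    return gamma(907 + gav) / gamma(847 + gav)

# Bisect for the GAV that makes the half-yearly gamma ratio equal 1.007.
# The ratio grows monotonically with GAV, so bisection is safe.
lo, hi = 0.0, C - 908   # keep v_max strictly below c
target = 1.007
for _ in range(200):
    mid = (lo + hi) / 2
    if ratio(mid) < target:
        lo = mid
    else:
        hi = mid
gav = (lo + hi) / 2
print(f"GAV ~ {gav:.0f} km/s")   # close to the 294644 km/s quoted above
```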

It makes sense, after all: the GAV, which sits in the cosmic microwave background radiation rest frame, is a source of electromagnetic waves, which we know by definition have to travel at velocity c. The 0.9828 factor then arises from the tilt of the axis of each hierarchy level with respect to the direction of this energetic microwave source. This factor corresponds to cos⁻¹(0.9828) ≈ 10.6 degrees.

Now re-working the mass variations for all spin levels including CMBR source velocity:

Spin Level 1:

At 294644+627+250+30+0.465kps, we get γMAX = 5.96627
At 294644+627+250+30-0.465kps, we get γMIN = 5.96562

So the variation Δγ from its minimum to maximum value is ~0.011% every 12 hours.


Spin Level 2:

At 294644+627+250+30kps, we get γ = 5.9659
At 294644+627+250-30kps, we get γ = 5.9245
So the variation Δγ from its minimum to maximum value is ~0.7% every 6 months


Spin Level 3:

At 294644+627+250kps, we get γ = 5.945108
At 294644+627-250kps, we get γ = 5.627361
So the variation Δγ from its minimum to maximum value is ~5.65% every 112 million years


This implies that if one performs the Cavendish experiment with the same setup at a time separation of 12 hours, he will get a 0.011% difference from his previous reading. He may also get an error of 0.7% if it is re-done within 6 months, and theoretically he should get an error (or better, a deviation) of about 5.65% if re-done after 112 million years(!).
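All three swings can be recomputed in one pass. This sketch reuses the GAV estimate from the previous section; with these inputs the spin level 3 swing computes to about 5.65%:

```python
import math

C = 299_792.458  # speed of light in km/s

def gamma(v: float) -> float:
    """Lorentz factor for a speed given in km/s."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

GAV = 294_644            # km/s, the Great Attractor velocity estimated above
base = GAV + 627 + 250   # velocity component common to all three cases, kps

swings = {}
for name, v_hi, v_lo in [
    ("Spin level 1 (12 hours)",  base + 30 + 0.465, base + 30 - 0.465),
    ("Spin level 2 (6 months)",  base + 30,         base - 30),
    ("Spin level 3 (112 My)",    base,              base - 500),
]:
    swing = (gamma(v_hi) / gamma(v_lo) - 1) * 100   # percent
    swings[name] = swing
    print(f"{name}: gamma {gamma(v_lo):.5f} -> {gamma(v_hi):.5f}, "
          f"swing ~{swing:.3f}%")
```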

Science Horoscopy and scientific consequences of mass variation

spiral galaxy
Measuring G under the false assumption of mass conservation would be better described as science horoscopy: no matter how accurate the experiment, it will always give different readings at different locations on earth, at different times of the day and year, and at different locations of our solar system within the fixed frame of the whole universe. Here you will see that a LOT of scientific parameters DO change with the positions of the stars. Understanding this point is of primary importance if we are ever to settle on physics constants that are really constant and can be accurately specified. Perhaps the scientific consequences of the variation in mass, and the resulting variation in G, are even more drastic than the natural consequences. Knowing the exact velocity of the earth at any point in time with respect to the universe fixed frame of reference would enable us to know the EXACT values of all physics constants at any location and at any point in time, in both past and future. The value of G at any particular place and time on earth can itself be predicted by knowing the earth's position and velocity at that point in time. The reason different laboratories come up with G values which differ so wildly from one another is simply that the masses of both the test masses and the earth are varying, owing to variations in the earth's velocity at the different times of the experiments. These variations can be worked out by applying Einstein's effective mass equations to the velocity variations we found to exist in all Macro Spin Levels. No one else seems to have thought of applying Einstein's theory, because all researchers assumed that their laboratories are stationary and that the masses involved in their experiments are equivalent to rest masses. As we have seen, all matter around us is traveling at relativistic speeds, and applying Newton's law of gravitation to varying masses will obviously give varying results for G.
Indeed, Einstein is at fault as well: he should not have defined anything as 'rest mass' with respect to any reference frame other than the fixed frame of the closed universe. 'Rest mass' should not mean a mass which is stationary with respect to the observer, but one stationary with respect to the universe, and in fact it can only properly refer to the universe's fixed frame of reference itself, the universe 'shell' if you like. The very basis of the special and general relativity theories rests on three shaky foundations: the reputedly negative results of Michelson's experiments, which several scientists have since reconfirmed to be positive; the invariance of the speed of light in all directions; and the impossibility of detecting absolute motion; all of which have been shown NOT to be true by various other experiments.

The gravitational constant G is always measured indirectly, with the false assumption that the masses (both of the equipment and of the earth) are constant. But we now know that this is not true. Kepler's constant is the mass-independent constant which relates to the product G*M. A change in velocity will thus always result in a change in mass, and in G. If one does not take into account the variations in the earth's mass and the dumbbell masses, then for sure the value of the MEASURED G will vary with the time of year in which the experiment is done. Since G is a function of the reciprocal of mass (see the dimensions of G), the experimental values for it will vary because the mass property varies with the velocity of the earth relative to the fixed reference frame at that particular place within the universe.

As a matter of fact, G is not the only measured unit that suffers such variations, though as we discussed it is in fact the best candidate for detecting them. The consequences of this finding, which is a direct consequence of the ST conversion clean-up, are quite ground-shaking, considering that ALL parameters have to be accepted as varying with star positions and time, and these include the speed of light and all those SI units which we have already explored in analysing the present SI system of units.


The consequences of such a variation are just overwhelming! Mass is not conserved, and together with G and most scientific units, it depends on star positions. All the units above depend on the absolute velocity. The law of isotropy, the well-accepted feature of the universe which states that a body's physical properties are independent of its location and orientation in space, simply breaks down. We can no longer just ignore the term absolute velocity. The idea proposed by Ernst Mach to Einstein, that forces on bodies may vary relative to the orientation of distant stars, proves itself to be perfectly justified. Einstein in fact used this principle, which he himself named Mach's Principle, to reach his final laws of general relativity, but later simply dropped this most interesting part. The fact that we are unable to sense absolute motion is the result of lacking a human sense for such a thing (which would otherwise drive us crazy, given the speeds at which we are traveling). We know that traveling in an aircraft at uniform supersonic speed does not feel any different from sitting in your office, unless you look out of the window. It was not until James Clerk Maxwell's theory of electrodynamics was developed that physical laws appeared which suggested one could measure one's velocity without any reference outside one's own frame, or to use our example, without looking out of the window. Unfortunately, in those days, experiments could not show this to be true, and in physics, anything that cannot be detected by experiment makes no sense to define. But we cannot say that an experiment which consistently shows variations in the order of 0.0125% in the gravitational constant, or in mass, is not detecting anything! One cannot just ignore an idea or path of thought because it feels weird. As long as there is an experiment to support the idea, that idea can no longer be ignored.
The time has come to accept Mach's and Maxwell's ideas. One also has to keep in mind that these were the pioneers behind Einstein's work; it is well known that much of Einstein's credit goes to making public the ideas that these and other pioneers of his time had already known or derived. For the first time, science will be able to distinguish an object standing still from an object traveling at uniform velocity. It will also be able to distinguish an accelerating object from one under the effect of gravity. In other words, science will be able to better describe reality.

Just think how ridiculous it is that the 1kg prototype sitting at the International Bureau of Weights and Measures is cycling its own mass in sinusoidal fashion whilst encapsulated and 'stationary' under its glass jar! The 1Kg would now have to be defined something like: "This prototype shall henceforth be considered to be the unit of mass ...but measured when Leo, the earth and the sun line up, on the first year that our solar system returns from its 226 million year orbital journey..". There is no guarantee that its actual mass does not vary wildly from 1kg along the journey of our solar system within the universe.

Now, if one compares the universe structure presented here to the atomic structure, we find a surprising similarity: not only a hierarchical similarity, but also new numerical evidence showing that the universe is a set of spinning levels, with the ultimate source being a pure electromagnetic source. Starting off from the earth's rotation and diameter, we found that Spin level 1 has a velocity of 465 m/s.

Now, using the same relation already found at the atomic levels, that is a factor of 1/(2α) = 137.036/2, we can find the velocity for spin level 2:

Spin level 2 velocity = 462.9 * 137.036/2 = 31716.98 m/s
Spin level 3 velocity = 31716.98 * 137.036/2 = 2173184.18 m/s = 0.007c
... Now, similarly to the atomic version, we do not apply the 1/2 factor to the last spin level: Spin level 4 velocity = 2173184.18 * 137.036 = 297804468 m/s = 0.993c

Adding up the 4 spin level velocities: 0.993c + 0.007c + 0.000c + 0.000c ≈ c
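The ladder can be reproduced numerically. A sketch; note that the text starts from 462.9 m/s rather than the 465 m/s quoted earlier, and that the unrounded sum comes out at about 1.0007c, which the text rounds to c:

```python
# Building the spin level velocity ladder with the 1/(2*alpha) step
ALPHA_INV = 137.036          # 1/alpha, inverse fine-structure constant
C = 299_792_458              # speed of light, m/s

v1 = 462.9                   # spin level 1 velocity used in the text, m/s
v2 = v1 * ALPHA_INV / 2      # ~31_717 m/s
v3 = v2 * ALPHA_INV / 2      # ~2_173_184 m/s
v4 = v3 * ALPHA_INV          # no 1/2 factor on the last level: ~2.978e8 m/s

total = v1 + v2 + v3 + v4
print(f"v4 = {v4:.3e} m/s = {v4 / C:.3f}c")
print(f"sum of all levels = {total / C:.4f}c")   # slightly above c, ~1.0007c
```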

This reconfirms that Spin level 4 must exist and that its velocity is very close to the speed of light. Remember, the value we estimated previously for the GAV was 0.9828c, and that was based on the past records of deviation in G over a couple of decades. Also, given that no matter can travel faster than the speed of light, and that the observable universe is composed of matter, it follows that the sum of all spin level velocities cannot exceed c; so anything trying to spin or travel faster than c will start to have its axis tilted in such a way that its velocity component in the direction of the GAV never adds up to more than c. This would explain the earth's tilt to its orbit around the sun, and the solar system's tilt along the galactic plane. These tilts vary according to the spin velocities: the higher the spins, the greater the tilt. Earth's tilt results in the seasons and is known to vary from 21 to 25 degrees. The cause of this tilt has long been pondered by many scientists, and will remain an enigma until the relativistic universe model presented here is considered.

The simple and straightforward theory presented on this page, together with the high degree of mismatch obtained by independent labs for the experimental value of G, Mach's original arguments which led Einstein to draw up his most famous theory, Mikhail's experimental results, and the high precision of the GM product, must be more than enough to convince anyone with basic physics knowledge that mass and the majority of scientific parameters DO vary with the actual location and orientation of the earth with respect to the surrounding universe. There is, however, a small difference in the actual reason why this happens from the way Mach and Mikhail described the dependency. The scientific units vary with the positions of the stars in an indirect way, because a difference in the location of the stars IMPLIES a difference in our absolute velocity with respect to the universe fixed frame of reference. In other words, knowing our position relative to the stars is the same as knowing our absolute velocity.

It would not be the first time in history that even those ideas which seem to have been accurately verified turn out to be wrong, and in our present physical laws, everything could be wrong! A breakthrough in science always results in a breakdown of the old version: the bigger the breakthrough, the fewer relics are left over from the older version. I am here re-introducing what Mach and Maxwell had already shown to exist, the theory of absolute velocity. This theory, together with its natural and scientific consequences, simply dwarfs Einstein's tweaks to Newton's laws, which were in themselves another scientific breakthrough. It automatically abolishes Einstein's weak equivalence principle, which states that there is no local experiment that can distinguish between the effects on an observer of a uniform gravitational field and of constant acceleration. Of course this is not true for an observer who can locally measure his absolute velocity. This principle is the foundation of the General Theory of Relativity, and is now shown to be incorrect. It is known that, despite its popularity, the General Theory of Relativity was a failed attempt by Einstein to unify gravity with electromagnetism. This led Einstein to become increasingly isolated in his later years and to be ultimately unsuccessful in his attempts to unify general relativity and quantum mechanics. So, that something is wrong with the General Theory of Relativity is already guaranteed by its incompatibility with quantum mechanics, and also by the Nordtvedt effect: many metric theories of gravity actually predict that massive bodies violate the weak equivalence principle. Brans-Dicke scalar-tensor theory has come very close to the truth, and in fact successfully linked such an effect to the possibility of a spatially varying gravitational constant.
Niels Bohr, the father of quantum mechanics, claimed: "We will never understand anything until we have found some contradictions". Indeed, the most difficult part is not finding the contradictions, but accepting them without trying to discard them as experimental error. My study thus concludes that the theory of absolute velocity should be at the foundation of all physical units, since all our experiences and experiments depend on their absolute motion, that is, their motion relative to our universe fixed frame of reference.

The universe hierarchy levels found in The Book Of Abraham

In the previous paper I proposed how the records of the variable measurements of big G can be used to reveal the actual hierarchical system of our universe. It is quite a simple system, made up of multiple levels of astronomical bodies revolving about central massive energetic bodies. Summarising: the Earth spins around the sun, the solar system spins around the centre of our galaxy, Sagittarius A*, our galaxy spins around a yet unknown centre of galaxies, and this centre of galaxies spins around the central core of the universe. I have numbered these levels Macro Spin Level 1 to 5. Only four spin levels (1 to 4) have a central core orbiting another level. Level 5 has a unique body, the core of the universe, the centre of all existence, which spins about its own axis. Surprisingly enough, we find mention of such a system in documents dating back to 509BC, which were apparently copies of originals passed down from the days of Abraham, who lived around 2000BC. Not only were these astronomical bodies mentioned, but they were also named, with some details about their periodicity and energy levels.

In July 1835, an Irishman named Michael Chandler brought a traveling exhibition of four Egyptian mummies and papyri to Kirtland, Ohio, then home of the Mormons. The papyri contained Egyptian hieroglyphics. Joseph Smith Jr., the founder of the Latter Day Saint movement, examined the scrolls in the exhibit and noted that some of the text was recognizable because of its similarity to the text from the golden plates of the Book of Mormon. The church purchased the four mummies and the papyri for $2400. The Joseph Smith Papyri have now been determined to be from the late Ptolemaic or early Roman period, 753-509BC, which is at least 1500 years after Abraham's lifetime. Apologists respond that the papyri need only be copies of an original written by Abraham, and also claim that there exist some Egyptian scrolls from the same time period that clearly contain the name of Abraham. The Book of Abraham is the translated text from these acquired papyri, published as part of the Pearl of Great Price, one of the four canonical scriptures of The Church of Jesus Christ of Latter-day Saints. The church has never, to our knowledge, taken any action on this work, either to endorse or condemn it; so it cannot be said to be a church publication, nor can the church be held to answer for the correctness of its teaching.

A full copy of the Book Of Abraham is available here

Rosetta Stone
The ability to translate Egyptian hieroglyphs stems from the discovery in 1799 of the Rosetta Stone, a large granite tablet which contained a message written in two languages, Egyptian and Greek. Since Greek was well known, the stone made the translation of Egyptian hieroglyphs possible for the first time since antiquity. Joseph Smith claimed a working knowledge of hieroglyphs, but such knowledge was not yet well disseminated in the United States at the time he made his translation; hence his translated text could not be verified for accuracy by scholars.
I will here discuss only a few statements from the chapters of the Book of Abraham which are relevant to our topic. These are found in Chapters 3 through 5, describing a vision in which God reveals much about astronomy, the creation of the world, and the creation of man.

Smith gave interpretations for several figures in this facsimile. To Egyptologists, this figure is a hypocephalus: it was placed under the head of the deceased in case he forgot some of the personalized detail needed to know what to say and how to behave in relation to the 'gods' and trials after death. Apologists respond that some of Joseph Smith's translations restore the original author's symbolic representations rather than the literal Egyptian translations. Other scholars praise Joseph's work; quoting Michael D. Rhodes: "One or two could conceivably be dismissed as mere chance or luck of guessing, but the many correct interpretations taken together are impossible to ignore. It is clear that Joseph Smith knew what he was talking about."


Abraham fax2

The following are some of the most relevant astronomical interpretations translated from facsimile 2 by Smith:

Fig. 1. Kolob, signifying the first creation, nearest to the celestial, or the residence of God. First in government, the last pertaining to the measurement of time. The measurement according to celestial time, which celestial time signifies one day to a cubit. One day in Kolob is equal to a thousand years according to the measurement of this Earth, which is called by the Egyptians, planet Jah-oh-eh.

Fig. 2. Stands next to Kolob, called by the Egyptians Oliblish, which is the next grand governing creation near to the celestial or the place where God resides; holding the key of power also, pertaining to other planets.

Fig. 3. Is made to represent God, sitting upon his throne, clothed with power and authority; with a crown of eternal light upon his head; representing also the grand Key.

Fig. 4. Answers to the Hebrew word Raukeeyang, signifying expanse, or the firmament of the heavens; also a numerical figure, in Egyptian signifying one thousand; answering to the measuring of the time of Oliblish, which is equal with Kolob in its revolution and in its measuring of time.

Fig. 5. Is called in Egyptian Enish-go-on-dosh; this is one of the governing planets also, and is said by the Egyptians to be the Sun, and to borrow its light from Kolob through the medium of Kae-e-vanrash, which is the grand Key, or, in other words, the governing power, which governs fifteen other fixed stars, as also Floeese or the Moon, the Earth and the Sun in their annual revolutions. This planet receives its power through the medium of Kli-flos-is-es, or Hah-ko-kau-beam, the stars represented by numbers 22 and 23, receiving light from the revolutions of Kolob.

Fig. 6. Shows 4 canopic figures, which represent this earth in its four quarters.

Fig. 11. If the world can find out these numbers, so let it be. Amen.


Below are some of the most relevant translations from Chapters 3-4 from the papyrus:

.. And the Lord said to me: These are the governing ones; and the name of the great one is Kolob, because it is nearest to me, for I am the Lord your God: I have set this one to govern all those which belong to the same order as that upon which you stand.

If two things exist, and there be one above the other, there shall be greater things above them; therefore Kolob is the greatest of all the Kokaubeam (stars) that you have seen, because it is nearest to me.

And he said to me: This is Shinehah, which is the sun. And he said to me: Kolob, which is star. And he said to me: Olea, which is the moon. And he said to me: Kokaubeam, which signifies stars, or all the great lights, which were in the firmament of heaven.

And the Lord said to me, by the Urim and Thummim, that Kolob was after the manner of the Lord, according to its times and seasons in the revolutions thereof; that one revolution was a day to the Lord, according to his way of reckoning, it being one thousand years according to the time appointed to that where you stand. This is the reckoning of the Lord’s time, according to the reckoning of Kolob.

And where these two facts exist, there shall be another fact above them, that is, there shall be another planet whose reckoning of time shall be longer still; and thus there shall be the reckoning of the time of one planet above another, until you come right onto Kolob, which Kolob is after the reckoning of the Lord’s time; which Kolob is set onto the throne of God, to govern all those planets which belong to the same border as that upon which you stand.

Howbeit that he made the greater star; as, also, if there be two spirits, and one shall be more intelligent than the other, yet these two spirits, not withstanding one is more intelligent than the other, have no beginning; they existed before, they shall have no end, they shall exist after, for they are eternal.

Turning to the Book of Daniel (Old Testament), we read "And four great beasts came up from the sea, diverse one from another....After this (the vision of the first 3 beasts) I saw in the night visions, and behold a fourth beast, dreadful and terrible, and strong exceedingly; and it had great iron teeth: it devoured and brake in pieces, and stamped the residue with the feet of it: and it was diverse from all the beasts that were before it..." The four beasts, represent the first four spin levels in their destined border or sphere of creation. Their eyes are a representation of light and their wings are a representation of motion or action. Our fifth spin level is usually referred to as the throne of God in all the scriptures.


Interpretation of Smith's translation from an engineer's point of view

Although many Egyptologists do not agree with Smith's translation, many other students of hieroglyphs accept it as correct. One has to consider that Egyptian hieroglyphs are very challenging to translate, in that the same hieroglyph can have a different meaning in different contexts. Many in fact admit that his translation does agree with today's Egyptologists' translations when viewed from an astronomical point of view. To me, Smith's translation is so close to the description of the universe hierarchy I gave a few years ago (previous page) that I totally exclude the chance of Smith inventing all this on his own. One has to keep in mind that Smith's translation of the universe does not agree with any of our present knowledge, and its contents were new both to himself and to any astronomy group at that time, so he could not have been biased by any former knowledge and was probably just doing his best to provide a faithful direct translation.

Here are the equivalent names for the astronomical bodies mentioned in his text, related to my previous work:

Jah-oh-eh = Earth
Olea = Our Moon
Shinehah = Our sun
Floeese = Our solar system
Kokaubeam = Stars

Enish-go-on-dosh = Energy medium that powers the solar system hierarchy level with Sun at centre.
Kae-e-vanrash = Energy medium that powers different (15) solar systems with Sagittarius A at centre.
Kli-flos-is-es = Energy medium that powers Kae-e-vanrash hierarchy level with Oliblish at centre.
Hah-ko-kau-beam = Energy medium that powers Kli-flos-is-es hierarchy level with Kolob as centre.

Shinehah = Our Sun, centre of Enish-go-on-dosh hierarchy level (the solar system).
Sagittarius A = Known as centre of Kae-e-vanrash hierarchy level (our galaxy)
Oliblish = Centre of Kli-flos-is-es hierarchy level (centre of galaxies)
Kolob = The heart of the universe, the main core, centre of Hah-ko-kau-beam, the main source of all forms of energy, God.

The 4 beasts with eyes (lights) and wings (motion), each in its sphere of creation, represent the 4 central cores found at the centre of hierarchy levels 1 to 4. Unlike the other cores, of which a plurality can exist, Kolob is unique and usually described as the residence of God. Unlike the other 4 cores, it is a single star whose motion is described as revolutions, that is, spinning about its own axis, not orbiting around any other core. The fourth beast is Earth's spinning core. It does not require much imagination to conclude that as our earth expands (we have enough evidence for this) and therefore gets heavier, its core will radiate more energy, its continents will move apart, and the molten iron core will devour all the land into hot molten metal, matching Daniel's description of the fourth beast. With all this information, I can now refine the details set forward in my previous paper as shown below.


Universe hierarchy

Modern astronomy will agree with this model up to the Sagittarius A level. Beyond this hierarchy level, no one knows anything, even if most researchers would agree that they cannot exclude the existence of higher hierarchy levels, and that the highest known level could in fact be rotating about some yet unknown centre of the universe. Another interesting fact is the mention of the energy medium for each hierarchy level. Today we call this, in a generalised way, the 'vacuum', but as described in many parts of my website, this vacuum is itself known to be made up of energy. It is therefore obvious that the vacuum energy, or zero point energy, or the ratio T/S, is different for each hierarchy level, that is, for each of the levels referred to as spheres of creation. This supports my point that all known physical parameters, which I showed to be based on ratios of T and S, vary with their position in the universe. One must stop and think how modern astronomy and the whole of science would change once we accept that the speed of light changes quite abruptly while passing from one hierarchy level to the next. We are like fish viewing the outside world from under water. Not only the speed of light, but all the physical constants we are used to knowing as 'constants of free space'.



Free energy from space, or better, Energy from Free Space

From the previous section, we can conclude that if a method is found to tap this 'free energy', or better, 'energy of free space', then we have a possible solution for the ultimate clean energy source - the vacuum. So, will dissipating this energy also change the universal constants, with catastrophic results? The answer is definitely NO, as energy is never dissipated but just converted from one form to another. The easiest way to tap energy from free space is with a simple resonant circuit tuned to the vacuum oscillations of this energy and matched to its impedance. This frequency can be worked out as follows:

Free space frequency f0 = 1/T = 7.4 × 10^42 Hz
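As a quick numeric check, the period implied by the quoted frequency can be recovered by simple inversion. The comparison with the conventional Planck time below is my own addition, not part of the original text:

```python
# Numeric check of the free-space frequency quoted above.
f0 = 7.4e42           # quoted free-space frequency, Hz
T = 1.0 / f0          # implied period of the vacuum oscillation, s
print(f"T = {T:.3e} s")                        # ~1.35e-43 s

# For comparison (my own addition): the conventional Planck time.
t_planck = 5.391247e-44                        # s, CODATA value
print(f"1/t_planck = {1.0 / t_planck:.3e} Hz") # ~1.85e43 Hz, same order of magnitude
```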

Surely this tuned circuit will not consist of a lumped inductor and capacitor as we know them, but the concept should be the same. The tuned circuit could easily be formed from an atomic structure, such as a mass with special characteristics. A perfect example would be radioactive isotopes, which tap into this energy and convert some of it into lower frequency radiation and heat. The perfect material should be very dense, with its structural dimensions as small as possible. This explains why most dense (heavy) metals, with atomic numbers over 80, start exhibiting radioactivity. Probably the most efficient vacuum energy resonant circuit in our solar system is the sun, which converts much of this energy into gamma radiation. Today's scientists see the sun emitting huge amounts of energy, but they do not see the far greater amounts entering from all directions at higher frequencies, though some of them assign such invisible energy to gravitons. After a part is used up to make up the sun's mass, this high frequency energy reflects back at a lower frequency toward the surface; the energy is continuously absorbed and re-emitted at lower and lower frequencies, creating the sun's own matter spherical standing wave, so that by the time it reaches the surface it is primarily low frequency heat and light. The difference between the incoming energy and the resulting heat and light energy simply equates to the sun's own mass! Again, we notice this concept is clearly present in Smith's translation for figure 5 - '...the Sun, and to borrow its light from Kolob through the medium of Kae-e-vanrash' - which precisely states that the sun borrows its energy from the core of the universe, through the energy medium surrounding Sagittarius A*. Such downshifting of frequency requires a huge mass, not simply a ball of hydrogen. Studies presented at the American Astronomical Society's meeting in Washington D.C. have indicated that our Sun does indeed have a heavy metal core; read Prof. K. Oliver Manuel on the iron-rich Sun. It is highly probable that our naturally occurring free energy converter (the sun) is made up of highly radioactive, long half-life heavy elements, whose main stable by-product seems to be iron, which guarantees its long lifetime. The sun is actually producing energy downconverted from the highest electromagnetic frequency range, which we call gravity, and not from hot fusion! One should recall that the currently prevailing belief that the sun obtains its power from hot fusion is just a theory which at best is only speculative in nature, and no physical evidence shows that it merits any more credibility than other theories. The concept presented here, elaborated in my EMRP Gravity theory, is a much better theory than the classic one proposing a sun made up of an inexhaustible hydrogen mass. The fantastic thing about the energy of free space is that anyone can tap a non-zero amount of power from every point in space! In one of his monologues, Mitar Tarabic, a prophet from Kremna, talked about such a free and inexhaustible energy source:

"Instead of mowing and using haystacks, people will dig holes everywhere, and the force will be all around but it will not be able to say: "Come and get me, can't you see I am everywhere." Only after many years will the people remember this force and see that their holes were in vain. The force will be in people themselves, but a lot of time shall pass until they recognize it and start to use it... When that happens, people will be sorry that they did not do it earlier, because it is very simple."  

Radioisotope Thermoelectric Generators (RTG)

Most of you have probably learnt at school that radioactive isotopes are sources of radioactive emissions. Some radiate alpha, some beta, and some also generate X-rays and gamma rays, but no one ever told you that these isotopes are NOT the sources of such radiations, but simply energy converters of the incoming Planck frequency band radiation making up all matter, the same energy responsible for gravity. Most matter is not dense enough to create any noticeable downshifting of the incoming radiation frequency (energy), hence we are not able to detect either incoming or outgoing radiation for most common substances. However, for mass numbers above that of lead, the high density of matter results in noticeable frequency downshifting of this incoming energy, and we start to detect radiation in the highest bands of our presently known spectrum. The denser the substance, the more of the incoming energy is trapped within its standing wave structure, and the less energetic is the outgoing/reflected energy. The less energetic the outgoing energy, the lower its frequency - low enough for us to detect it in the upper part of the known electromagnetic spectrum. In fact, if one tries to lightly shield a gamma source with some aluminium foil, the foil will act as a downshifting device and generate X-rays. Yes, X-rays will be emitted from the other side of the aluminium foil, but nobody ever says that aluminium generates X-rays. You can now finally understand why a radioactive isotope is simply tapping or downconverting this sea of free space energy (ZPE), not generating any radiation by itself.

Here I shall quote a very interesting and relevant statement written by Tesla, dated 10th July 1937. He says:

"There is no energy in matter other than that received from the environment. It applies rigorously to molecules and atoms as well as the largest heavenly bodies, and to all matter in the universe in any phase of its existence from its very formation to its ultimate disintegration."

It is a known fact that radioactive isotopes like plutonium produce heat. This fact has already been exploited in RTGs. RTGs have proven their safety and capability in many space missions, including human missions. Radioactive material (plutonium-238) is used to produce heat, which is converted to electricity either by thermoelectric devices, such as Peltier elements and thermocouples, or by the thermionic effect. When a material gets very hot (such as the hot filament in a television cathode ray tube), it can emit electrons from its surface. In a thermionic RTG, this electron emission is a direct source of electrical current. The plutonium is not placed in the RTG in pure form, but is installed as bricks of plutonium dioxide (PuO2), a ceramic which, if shattered, breaks into large pieces rather than smaller, more dangerous dust. The plutonium dioxide is encased in layers of materials, including graphite blocks and layers of iridium. Both materials are strong and heat resistant, and protect the plutonium bricks in the event of a launch explosion.

The RTG uses only decay heat, meaning there is no fission chain reaction involved, and also that the radioactive material can be encapsulated to prevent release into the atmosphere. As long as the capsule is not tampered with, an RTG is the nearest thing to a clean free energy device, directly converting free space energy to heat and electricity.
The above diagram shows the RTG used on board Cassini, a NASA space probe still in operation. An RTG is fuelled with about 10.9 kilograms of plutonium dioxide (a ceramic form primarily composed of the plutonium-238 isotope) and can initially generate about 280 Watts of electrical power; after ten years it is still able to generate about 230 Watts. The half-life of plutonium-238 is about 87 years. The outer shell is basically a heatsink, in contact with the cold side of the Si-Ge unicouple array. Two RTGs are needed to generate the 400 Watts of power that the Cassini orbiter needs.
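The quoted power figures can be cross-checked against simple exponential decay. A minimal sketch, using the 87-year half-life and 280 W starting power from the text above; note that decay alone predicts roughly 259 W after ten years, so the lower quoted figure of 230 W must also reflect degradation of the thermocouples themselves:

```python
# Output of a radioisotope source from its half-life, assuming pure decay.
# Figures taken from the text above; thermocouple degradation is not modelled,
# which is why the real electrical output drops faster than this.
def rtg_power(p0_watts: float, half_life_years: float, t_years: float) -> float:
    """Power remaining after t_years, assuming pure exponential decay."""
    return p0_watts * 0.5 ** (t_years / half_life_years)

p10 = rtg_power(280.0, 87.0, 10.0)
print(f"Power after 10 years (decay only): {p10:.0f} W")  # ~259 W
```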

Existence of Higher dimensional space & Perception of time

The observation that spatial dimensionality is limited to three dimensions has long been a puzzle to scientists. Our mathematics does not limit us to three dimensions. Why are there only three? We are 3D observers, and this makes it easy for us to conceive the observed reality as 3D. We can also quite easily conceive a 2D universe as a subset of our 3D universe, and we see how complex the explanations can get in a 2D universe for events we find simple in our 3D universe.

The theory of relativity made spatial dimensionality elastic. The space-time continuum was conceived. Four-dimensional space-time was proposed, and attempts to visualise 4D space as an extension of our 3D world became popular. We talk about 3D space being curved around some 4D sphere, like the atmosphere around the earth.

In science fiction, discussion of alternate planes, or dimensions of existence, has become ingrained. The religious "Heaven" has been moved from the stars and galaxies to these alternate dimensions. In this section I will show you how to scientifically understand higher dimensions, which will hopefully lead you to a better understanding of the higher dimensional universe of which we all form part.

Many modern physicists, in their attempts at a unified theory, have proposed the existence of many space dimensions beyond three. These multi-dimensional efforts at grand unification have indeed helped describe theory mathematically and predict experimentally observed facts, but attempts at 4D visualisation remain hard. We talk of extra dimensions being curled into minute 3D spaces.

One should keep in mind what we are with respect to the space around us. Each one of us is a 3-dimensional spatial observation point in space, and dimensionality is not a property of 'reality', but of the being, the observer. Our spatial dimensionality is a characteristic of our conceptions, our mind. This means it is a characteristic, or property, of knowledge rather than of reality. Spatial dimensionality is a property of the observer rather than of the observed.

So, is everything observed around us just an illusion? Not at all; the things around us would still exist even if no one looked at them. To say spatial dimensionality is a very powerful tool may be one of the all-time greatest understatements. However, if spatial dimensionality is a property of our knowledge, then it is not a complete universal truth, but just a shadow of the truth. The answer, of course, is that our spatial dimensionality is based upon what we see. Of all the senses a typical person possesses, sight is the one which plays the greatest role in the perception and conception of reality. The perception of spatial dimensions does not have to be based upon sight, hearing or any of the other senses. Our eyes are essentially 2D arrays which sense light reflected from viewed objects. Therefore, we never actually 'see' three spatial dimensions: we perceive stereographic 2D pictures, and in our mind we conceive the existence of a third dimension from the two stereographic pictures. As you see, our mind is already 'too busy' converting 2D sensed data to reconstruct a 3D observation picture of reality. For humans to visualise a world in more than 3 dimensions is no trivial task; it may even be impossible without physically modifying ourselves. If dimensionality is not a property of the universe, but of ourselves, then our attempts to 'visualise' 2D and 4D universes in terms of our 3D abilities are not only futile, they are nonsense. The reality perceived by a 2D being is the same reality as perceived by a 3D being and a 4D being. Their methods of description will vary greatly, but each is attempting to describe the same thing.

This alternative perspective on spatial dimensionality offers a rational answer to the question of why we conceive the universe to be limited to three spatial dimensions. The answer is that the universe is not limited to 3D - most scientific evidence points to a higher dimensional universe - but we are limited, due to our senses. Another dimensionality issue that is answered is that of the co-existence of multiple dimensions beyond three. This issue becomes nonsense: an object cannot pass to another plane or dimension of existence, because these planes or dimensions do not exist. No dimensions exist except in our minds.

Dimensions are powerful tools which we use to organise, live in and understand the universe. It seems reasonable to believe that a being who can conceive an n-D universe can develop more elegant knowledge than a being who can only conceive an (n-1)D universe. In essence, the more dimensions we can conceive, the more of the universe we can understand. TIME is only a way to organise information about the n-D universe, for all those mysteries which we have not been able to fit entirely into our (n-1)D spatial dimensionality framework. Remember, the initial hypothesis was that a being who perceives 'x' dimensions will conceive the universe in 'x+1' dimensions. We now expand the hypothesis to say that a being who perceives 'x' dimensions will conceive the universe in 'x+1' dimensions, where the '+1' is time. Therefore, however much beings may increase the total number of dimensions in which they perceive and conceive the universe, there will always be a temporal dimension to the universe for those beings. In the case of a limited dimensional universe of n dimensions, the universe (reality) will be the being (the n-D observer) itself, and that is the only possible non-temporal dimension.

If we could increase our perception to 3D, so that we could then conceive a 4D universe, many phenomena which we now describe as occurring at different times would then be described as occurring at different spatial locations. Each progressive increase in spatial dimensionality moves explanations from the infinite reservoir of 'time' to spatial locations. However, even though the number of spatial dimensions may increase without bound, the conception of 'time' remains constant for all beings, from 0D to 3D to n-D.

From these ideas one can deduce that we are 3D spatial observation points observing a multidimensional universe around us. For us 3D observers, the '+1' dimension cannot be spatially observed, so our mind perceives different 3D pictures changing through 'time'. Time, being the '+1' dimension, is so embedded in our minds that subconscious brain functions may be 'hardwired' to better enable its conception.

Current scientific knowledge is based on a 3D reality, which seems to get into trouble when small dimensions of length or time are involved. Science now talks of energetic particles that randomly pop in and out of existence, which makes no sense unless we try to understand how a higher dimensional universe may work. In fact, at the time of writing, the best candidate unified theory, supersymmetry, is fully compatible with this higher dimensional space theory.

Supersymmetry is an idea that has been around for decades. It states that every boson has an associated fermion and vice versa. So a quark, which is a fermion, has a supersymmetric imaginary partner called a squark, which is a boson. Likewise a photon, which is a boson, is teamed up with the photino, a fermion. None of the proposed supersymmetric particles have ever been detected. Scientists say this is because current particle accelerators are just not powerful enough. Science knows that these imaginary components MUST exist, but will never be able to detect or isolate them with current methods, for the simple reason that they are imaginary. Note that the term 'imaginary' is a mathematical term and does NOT mean 'non-existent'. Any form of matter interpreted in our space-time dimension can be mathematically expressed as a complex (Complex = Re + Im) function of space and time. Lately, some evidence that supersymmetry is real may have emerged from a study of gold and platinum atoms. Teams from the Ludwig-Maximilians University in Munich and the University of Kentucky in the United States have used the Tandem accelerator in Munich to bombard gold atoms with sub-atomic particles. The results of the interactions between the targets and the projectiles, they say, can only be explained by supersymmetry. This is the way to go, since we can only observe these imaginary particles through the motion of the real part.
Understanding 1 dimensional space

Supersymmetry involves the concept of multidimensional space. In order to understand dimensional spaces higher than three, let's start with the simplest 1D case, that of a 1D observer - a line. You might think, well, that's quite easy. In fact it is quite easy, but if you really understand it, you can use your knowledge to understand higher dimensions. The animation below shows the observer as a grey line, who is trying to perceive a reality (a 2D circle in this case) within his 1D limited mind. The animated blue line is what he perceives. Note that the reality, the circle, is not changing in time; its radius, colour and all other properties are part of the reality. The observed thing is very different: it is a blue line varying in length WITH TIME. For the observer, it remains a mystery what happened to the original full length of the line, and why and how it changes length and 'pops in and out' of his 'observed reality'.
Understanding 2 dimensional space

Let's now analyse a 2D case, that of the classic Flatland example, in which a person lives in a 2D universe and is only aware of two dimensions (shown as the blue grid), or plane, say in the x and y directions. Such a person can never conceive the meaning of height in the z direction; he cannot look up or down, and sees other 2D persons as shapes on the flat surface he lives in.
Now, we know that 3D space exists, and can conceive it, because we see each other in 3D space. So, what does a 3D sphere look like in a 2D plane? The answer is again graphically shown in the animation, which shows a circle expanding and contracting depending on which slice of the sphere intersects the 2D observation plane. In the 2D plane, the thickness of the plane tends to zero, but is not absolutely zero; there must be enough thickness for the circle to form and be observed. Thus, the 3D sphere is being differentiated with respect to one of its spatial dimensions (z in our case) across its diameter. Actually, in the special case of a sphere, it could intersect the plane at any angle to the z axis and still be perceived as a perfect circle in 2D. For the person living in 2D, the only way to recognise such a 3D structure is by integrating all the circles he sees, on top of each other. But here is the problem: he cannot imagine anything 'on top of each other'. A clever 2D guy has just one simple way to refer to this z-axis which is constantly differentiating the 3D object, and that is TIME.
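The circle the flatlander sees can be worked out with elementary geometry: a sphere of radius R sliced by a plane at height z shows a circle of radius sqrt(R² - z²). A minimal sketch of the animation's frames (the radius and step size here are arbitrary choices of mine):

```python
import math

def slice_radius(sphere_radius: float, z: float) -> float:
    """Radius of the circle where the plane at height z cuts the sphere.
    Returns 0.0 when the plane misses the sphere entirely."""
    if abs(z) > sphere_radius:
        return 0.0
    return math.sqrt(sphere_radius**2 - z**2)

# The 2D observer labels each slice with 'time' as the sphere passes through:
R = 1.0
for step, z in enumerate(k / 4.0 for k in range(-4, 5)):
    print(f"t={step}: circle radius = {slice_radius(R, z):.3f}")
# The radii grow from 0 to R and shrink back to 0 - the 'expanding and
# contracting circle' described in the text.
```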

I admit this concept is quite hard to grasp, especially when one moves on to describe a 4D universe differentiated by a 3D space, with both real and imaginary axes. The imaginary space dimensions can be pictured as follows. Just imagine a person in front of a 2D plane surface, but this time a mirror surface. The person is equivalent to the real part, and his image in the mirror is equivalent to the imaginary part. Imagine also that such a mirror is present everywhere he can possibly move. So, the person becomes DEPENDENT on the existence of his imaginary component. That is, if the image is no longer present in the mirror, then one can deduce that the person can no longer exist in reality! Now, this was an example of a 3D image reflected on a 2D plane (the mirror).
Understanding 4 dimensional space

Recall that ages ago, most people believed the earth was flat. Some thought they would "fall off the edge" of the earth if they went out too far. Little did they know that if they kept on going, they could possibly end up where they started, having experienced the entire trip as a straight line! No matter how far the subject travels (by boat, train, or plane), he will never come to a boundary: there is no "edge" to fall off from! It is because the earth exists on the surface of a sphere that these properties hold true. Let us now take this a step further.

Launched from the earth is a rocket ship travelling out into space. Its mission is to continue outward in a straight line in its current direction until it reaches the "outer edge" of the universe. When will the rocket ship reach the outer edge of space? The previous example presents a similar situation: the fear of "falling off the edge" of a flat earth - an earth that in reality has no edge to fall off from. Now, if our universe is not really 3D, we will find that the ship never encounters an outer edge. Not only that, but it could also possibly end up where it started, having experienced the entire trip as a straight line! No matter how far the rocket ship travels through space, it will come across no boundary of any kind. These properties would hold true if the universe existed on the surface of a hypersphere, in the same way that the earth exists on the surface of a sphere.

The hypersphere is the 4D analogue of a circle in 2D or a sphere in 3D. How would we picture a hypersphere? The key to approaching something of the fourth dimension is the tool of analogy: we rely upon the corresponding lower-dimensional structures we have studied as the means by which their 4-dimensional analogue is constructed. A solid circle is a 2-dimensional object. When cut into 1-dimensional slices, you will see a line that varies in length between the size of a single dot and its full length. A solid sphere, as shown above in the flatland animation, is a 3-dimensional object. When cut into slices, we find that a solid sphere is in essence an array of solid circles that increase and then decrease in diameter. Having obtained the knowledge we have so far, we now possess the ability to bring these lower-dimensional structures "up a notch" through analogy, to envision a 4D hypersphere.
We cannot directly visualise a hypersphere, for the very reason that it is a 4-dimensional object and goes beyond our senses. What we can visualise, however, is a hypersphere in the form of 3-dimensional slices (as displayed to the left). A hypersphere is in essence an array of 3-dimensional solid spheres that increase and then decrease in size. This represents our basic conception of the hypersphere, and is shown in the animated picture here.
As I have said, in 4D space our 'time' is integrated into a space dimension, and then action at a distance (gravity being the purest example) becomes much clearer to us. Just imagine, in the classic 2D example shown above, that the 2D person is somehow able to impart a force on the circle he sees on the plane. What would the consequences be? He would eventually move the whole sphere, and would also change the position of the future circles in the plane. He would also move all points on the circle, as if all points were 'entangled', and the transmission of this force from the point of action to any other point on the circle does not depend on the time it takes for the sphere slice to pass through. So, to the questions - is gravity a push, a pull or both? does gravity act on a body, or is gravity generated by the mass of a body? - there is no answer if the problem is analysed only in 3D space, as the interaction between two bodies is just an effect we see due to the interaction on a single body existing in a higher dimension. The interaction between the two different dimensions takes place in the 'mirror plane', where the time dimension does not exist but is rather a perception of the observer. That also means that issues like 'the finite speed of gravity' clearly make no sense.

If you extend this to our existence and to the existence of all matter, you will find that all actions (including gravity) are at work in a higher dimension, and we are here in 3D space observing the effects being played out at that higher dimension. The 4D I am referring to is quite different from Einstein's 4D space-time, in that it is a 4D space with no time. The time coordinate comes in as a false perception of the 4th space dimension, which we are unable to imagine, analogous to the flatland man who cannot understand height and depth. In this figure, you can see what a 4D sphere looks like when differentiated in 3D space. When one differentiates this 4th dimension with respect to an infinitely small mirror thickness (Planck's length being the best candidate), then you get the universe we observe, with Planck's time being the time taken for each 3D slice to pass through the 'thickness' of the mirror, and such a universe is equivalent to Einstein's space-time.

So, what is the speed of light? The speed of light now makes more sense: it is the thickness of the mirror divided by the time it takes for the next slice. It is the maximum speed of differentiating the 4D reality from a 3D spatial point of view. In our context, the value would be equal to Planck's length divided by Planck's time, which is equal to c, the speed of light. That's why Einstein's theory of relativity, although correct, CAN NEVER give us all the answers to our questions, because it is NOT COMPLETE. As Rudolf Steiner stated: "Anything dead tends to remain within the three ordinary dimensions, while anything living constantly transcends them". Applying the same rule to everything, we may modify this statement as: "Anything stationary exists in the ordinary 3D, whilst anything moving is being constantly differentiated in each 3D plane, and hence exists in the fourth dimension". This statement thus defines 4D space as space in motion with respect to itself. Click here for an excellent site discussing Space in motion.
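The ratio quoted above is easy to verify numerically. The CODATA values for the Planck length and Planck time below are my own insertion, not from the original text:

```python
# Planck length divided by Planck time reproduces the speed of light.
l_planck = 1.616255e-35   # m, CODATA 2018
t_planck = 5.391247e-44   # s, CODATA 2018
c = 299_792_458.0         # m/s, defined value

ratio = l_planck / t_planck
print(f"l_P / t_P = {ratio:.6e} m/s")                  # ~2.99793e8 m/s
print(f"relative error vs c: {abs(ratio - c) / c:.1e}")
```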

What's the evidence for the existence of higher dimensions?

In physics, the inverse square law is quite common. This relation holds for the gravitational attraction between masses, for the electrical forces between charges, and for the magnetic forces between moving charges. A force that varies as the inverse square of the distance increases as the distance is reduced and decreases as the distance is increased, in proportion to 1/d2: halving the distance quadruples the force, and doubling it cuts the force to a quarter.
Electromagnetic energy decreases as if it were dispersed over the area of an expanding sphere, 4πR², where the radius R is the distance the energy has travelled. The amount of energy received at a point on that 3D sphere diminishes as 1/R². This clearly shows the origin of the inverse-square law.
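This geometric spreading can be checked directly: dividing a fixed emitted power over the sphere area 4πR² gives an intensity that falls as 1/R². A small sketch (the 100 W source power is an arbitrary choice of mine):

```python
import math

def intensity(power_watts: float, distance_m: float) -> float:
    """Power per unit area after spreading over a sphere of radius distance_m."""
    return power_watts / (4.0 * math.pi * distance_m**2)

# Doubling the distance cuts the received intensity to one quarter:
i1 = intensity(100.0, 1.0)
i2 = intensity(100.0, 2.0)
print(round(i1 / i2, 6))  # 4.0
```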
Here is a table showing the volume and surface area of hyperspheres of different dimensions:


Dimension (n)   Shape      Volume           Surface Area
2               circle     π r²             2π r
3               sphere     (4/3)π r³        4π r²
4               4-sphere   (1/2)π² r⁴       2π² r³
5               5-sphere   (8/15)π² r⁵      (8/3)π² r⁴
6               6-sphere   (1/6)π³ r⁶       π³ r⁵
7               7-sphere   (16/105)π³ r⁷    (16/15)π³ r⁶
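These table entries all follow from the closed-form n-ball formulas V_n(r) = π^(n/2) r^n / Γ(n/2 + 1) and S(r) = n·V_n(r)/r, which a short script can verify (the gamma-function form is standard mathematics, supplied by me rather than taken from the original text):

```python
import math

def ball_volume(n: int, r: float = 1.0) -> float:
    """Volume of the n-dimensional solid ball of radius r."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1) * r ** n

def sphere_surface(n: int, r: float = 1.0) -> float:
    """Surface area of the boundary of the n-dimensional ball."""
    return n * ball_volume(n, r) / r

# Spot-check against the table above (r = 1):
print(ball_volume(4))      # (1/2)π²  ≈ 4.9348
print(sphere_surface(4))   # 2π²      ≈ 19.7392
print(ball_volume(5))      # (8/15)π² ≈ 5.2638
```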

As a result, a force that varies inversely with the square of the distance can be considered as a conventional 1-dimensional force vector (x-axis) that is scattered into two additional dimensions (y, z) due to the 3-dimensional nature of space. The square power of the distance reflects the projection of such a force over a 3D spherical surface area. But what happens if the force is also acting in higher-order dimensions? What if the originating force is being projected onto a higher-dimensional surface area? Are there forces which vary with powers other than the inverse square?
The Casimir force given by the above equation is known to vary as the inverse fourth power of the separation d, which is two orders higher than the more common forces, and coincides with a force projected over the surface area of a 5D hypersphere (see table above). Such a force, varying with the fourth power of the distance, can thus be considered as a force vector scattered in a 5-dimensional space. On this reading, the field that originates the Casimir force is a 5-dimensional field: a hyperspace field that produces the corresponding effects in our restricted 3D view of the universe.
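The d⁻⁴ scaling can be checked against the standard parallel-plate Casimir pressure, P = π²ħc/(240 d⁴); the 1 µm gap below is an arbitrary illustrative value:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J·s
c = 2.99792458e8        # speed of light, m/s

# Standard Casimir pressure between ideal parallel plates:
#   P(d) = pi^2 * hbar * c / (240 * d^4)
def casimir_pressure(d):
    return math.pi**2 * hbar * c / (240 * d**4)

p1 = casimir_pressure(1e-6)    # plates 1 micron apart
p2 = casimir_pressure(0.5e-6)  # halve the gap...
print(p2 / p1)                 # ...and the pressure grows 2^4 = 16-fold
```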


Can dimensions be limited, or is the universe really infinite?

From our point of view, the universe seems to be infinite, and not only infinite but ever-expanding. Having seen how our seemingly 3D space-time universe can fit within a 4D hypersphere, which in turn can fit on the surface of a 5D hypersphere, and so on, where a difference in time is equivalent to a different point within its volume, you can understand why the universe as seen by a 3D observing creature/mind has no limits.

Just imagine one of those 2D creatures, who cannot understand height in the z direction, placed on the surface of a sphere. He would walk round and round searching for an edge forever, and may finally conclude, and even prove, that the path is infinite. The same applies to a 1D creature going round a simple circle, and therefore the same applies to us 3D creatures living and travelling around in a 4D universe! In general, we can say that a creature with nD observation capability will observe an (n+1)D universe as infinite. We also learn that for an nD observer, the only way to observe a universe of a higher dimension than himself is to 'walk around it' and memorise. A 1D creature cannot understand what a circle is other than by observing, one by one, all the points making it up. Similarly, a 2D creature cannot understand what a sphere is other than by observing the flow of circles making it up. In all cases, walking around, or observing the flow through time, is necessary to observe a higher-dimensional space.

The question is: how can we know how many dimensions the universe is made up of? All the arguments mentioned above can be applied to any dimension and would imply the possibility of an infinite-dimensional space. But mathematics shows us that there are as yet unexplained reasons for which an ultimate dimension may be reached. One very interesting curve is the plot of the surface area of unit hyperspheres of different dimensions, tabulated below. One would easily think that as we go higher in dimensions the surface area of the n-sphere would keep increasing at each stage, and yet something very strange occurs: a maximum in surface area is reached at the 7th dimension. Could this indicate the real ultimate dimension of the universe?

Dimension (n) Volume of unit n-ball Surface area of unit n-sphere
1 2.0000 2.0000
2 3.1416 6.2832
3 4.1888 12.5664
4 4.9348 19.7392
5 5.2638 26.3189
6 5.1677 31.0063
7 4.7248 33.0734
8 4.0587 32.4697
9 3.2985 29.6866
10 2.5502 25.5016
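The surface-area maximum at n = 7 claimed by the table can be verified directly from the closed-form expression for the unit n-sphere's area:

```python
import math

# Surface area of a unit n-sphere embedded in n dimensions:
#   S_n = 2 * pi^(n/2) / Gamma(n/2)
def unit_sphere_area(n):
    return 2 * math.pi**(n / 2) / math.gamma(n / 2)

# Scan the dimensions from the table and find where the area peaks.
areas = {n: unit_sphere_area(n) for n in range(2, 11)}
peak = max(areas, key=areas.get)
print(peak)  # 7 — the surface area peaks at dimension 7, as in the table
```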
What would an nD observer see if the universe in which he lives has exactly his own n dimensions? The answer is: a still, static (frozen in time) spatial shape of n dimensions. A 2D creature does not need to move around a circle to recognise it or know anything else about it, and a 3D creature does not have to flow through the circular slices of a sphere to recognise a sphere. Note that the actions 'move' and 'flow' both require the time dimension to make sense, but 'recognise' is an act that reacts to the shape of a static structure and needs no time. For an nD observer, the n-dimensional universe is static, lifeless, and does not change through time, yet holds all the knowledge of what lies within all lower dimensions. Let's name this ultimate nD observer the universal observer. For the universal observer, time does not exist, since he and the universe are the same thing, and neither is affected by time in lower dimensions; from a lower-dimensional observer's point of view, he can be said to exist from eternity to eternity.

For those mathematically minded, let's take a car accelerating along a road. If we integrate the observed acceleration (m/s²) with respect to time, we get the car's velocity, measured in m/s. We have thus moved the motion of the car one dimension up with respect to time. If we integrate the velocity with respect to time, we get the total distance covered in metres, with no time unit left. So, did the road distance exist before or after the car started accelerating? As you see, the 'road', the time-independent dimension, is NECESSARY for all other actions (differentiations with respect to time) to take place. Hence the universe should be limited in its number of dimensions, with the highest dimension being time-independent, and being the universal observer itself.
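The two integration steps above can be written out for a constant acceleration; the figures of 2 m/s² over 10 s are arbitrary illustrative values:

```python
# Sketch: integrating a car's constant acceleration twice with respect to time.
# Units step down from m/s^2 to m/s to plain metres, with time dropping out.
a = 2.0           # acceleration, m/s^2
t = 10.0          # elapsed time, s

v = a * t         # first integration: velocity, m/s
d = a * t**2 / 2  # second integration: distance covered, metres
print(v, d)       # 20.0 m/s and 100.0 m
```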


References:

Hypersphere - Wikipedia
Hypersphere - Wolfram MathWorld
A. E. Lawrence, Hypersphere volume derivation (PDF)
GlobalSpec CR4 - Hypersphere general equations










