EVPs: Frequencies and Software

October 27, 2018

Sound and how we hear it

I felt I should start this paper with the fundamentals of sound and human hearing. This covers only the basics as they apply to the research we conduct and is not meant to be an all-inclusive report on the science of sound and acoustics, although I do plan to develop more reports in the future: on frequencies, acoustics, and radio waves at field investigation locations; on brain waves, telepathy, and quantum superposition; and on several other areas of theoretical interest. This paper is written in my own words, so anything I get wrong is completely my fault; I am by no means an expert in this field. But for now, let's talk about sound.

Sound is produced when a vibrating object moves the molecules of the medium it is in (air, water, or a solid) in waves of pressure. How fast the object vibrates is its frequency, measured in hertz (Hz): the number of wave cycles passing a point each second. These pressure waves travel through the medium and are perceived by the human ear. Our normal range of hearing is between 20 Hz and roughly 17–20 kHz, and our hearing is most sensitive between 1 kHz and 5 kHz. The core of the human voice lies between about 300 Hz and 3 kHz, though vocal energy can range from roughly 20 Hz up to 14 kHz; the highest note on record, sung in soprano, reportedly approaches 25 kHz. Sound above 20 kHz is called ultrasound, and sound below 20 Hz infrasound. Sound waves cannot travel in a vacuum, while radio waves, which are electromagnetic in nature like light and X-rays, can.
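
To put numbers on the relationship between frequency and a pressure wave, here is a minimal sketch (assuming the standard speed of sound in dry air at 20 °C, about 343 m/s) converting a frequency to its physical wavelength:

```python
SPEED_OF_SOUND = 343.0  # metres per second, dry air at ~20 °C

def wavelength(frequency_hz, speed=SPEED_OF_SOUND):
    """Wavelength (in metres) of a sound wave at a given frequency."""
    return speed / frequency_hz

# The limits of typical human hearing:
low = wavelength(20)       # ~17.15 m
high = wavelength(20_000)  # ~0.017 m (under 2 cm)
```

At 20 Hz a wave is over seventeen metres long, while at 20 kHz it is under two centimetres, which is part of why low and high frequencies behave so differently inside a room.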

These waves exert pressure relative to the atmosphere they travel in. This is called sound pressure and is measured in pascals (symbol: Pa). When referring to Sound Pressure Level (SPL) we use the decibel (dB); although the decibel is a logarithmic ratio rather than an absolute unit, it lets us express the pressure relative to a reference level. The decibel is, more or less, how "loud" a sound is perceived to be. Normal speech falls between 40 and 60 dB. Sound above 85 dB can damage the ear, and sound in excess of 130 dB causes pain and permanent damage.
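
The pascal-to-decibel relationship can be sketched in a few lines; the 20 µPa reference used here is the standard threshold-of-hearing value for SPL in air:

```python
import math

P_REF = 20e-6  # reference pressure in pascals (threshold of hearing)

def spl_db(pressure_pa):
    """Sound pressure level in dB relative to 20 micropascals."""
    return 20 * math.log10(pressure_pa / P_REF)

spl_db(0.02)   # 60 dB — upper end of normal speech
spl_db(1.0)    # ~94 dB — a common calibration level, above the 85 dB danger line
spl_db(P_REF)  # 0 dB — the threshold of hearing itself
```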

Our hearing spans from roughly 0 dB, the threshold of hearing, up to about 120–140 dB, the threshold of pain, but this depends on the frequency of the sound, as you can see in the chart (Figure 1.) below. We all hear differently depending on age as well as other factors such as environment and distance. A young child has the best hearing, while in adults hearing loss has already begun: by the age of 35, sensitivity at 8 kHz may be reduced by as much as 11 dB in males but only about 5 dB in females of the same age group, showing that gender also plays a role.

A change of 3 dB at mid frequencies is just noticeable to the average person. Some people have trained themselves to hear much subtler differences, but most listeners will not notice a change until it reaches about 3 dB. Detecting such differences becomes harder for everyone at low and high frequencies.

Figure 1.

As you can see from the blue dot in Figure 1., a sound at 70 dB with a frequency of 1 kHz is easily heard, but looking at the red dot we see that a sound at 50 dB with a frequency of 30 Hz is not audible to human ears, while a sound at 0 dB at 3 kHz (represented by the orange dot) should still be heard by most people.

Furthermore, the closer two frequencies are to each other, the harder it is to distinguish between them. For instance, if a tone at 330 Hz reaches the ear at the same time as one at 335 Hz, the ear either fails to distinguish the two or perceives their average, i.e. it hears 332.5 Hz. At a difference of around 10 Hz you also get a "beat" effect, a pulsing at the rate of the difference.
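
The averaging-and-beating behavior above can be sketched with the same 330 Hz / 335 Hz example (a simplified model of perception, not a full psychoacoustic one):

```python
def perceived(f1_hz, f2_hz):
    """Rough model of two close tones: an averaged pitch plus a beat rate."""
    return {
        "average_pitch_hz": (f1_hz + f2_hz) / 2,  # the single pitch the ear tends to hear
        "beat_rate_hz": abs(f1_hz - f2_hz),       # the pulsing rate of the "beat" effect
    }

perceived(330, 335)  # {'average_pitch_hz': 332.5, 'beat_rate_hz': 5}
```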

Reading Frequency Response Curves

Now we need to talk about frequency response curves as seen on a graph (Figure 2.), like the one we find in our audio editing software when we look at a spectrum analyzer. The first thing to understand is what the horizontal and vertical axes mean. The vertical axis is usually the sound level in dB: at any given frequency, louder sounds sit higher and quieter sounds lower. What matters in a frequency response curve is not the actual numbers but how much they vary from frequency to frequency. The horizontal axis is the frequency in hertz. Software spectrum analyzers let you choose your scale for both dB and Hz, but more on that later.
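
Under the hood, a spectrum analyzer computes something like a discrete Fourier transform: it decomposes the recorded samples into the amount of energy at each frequency. As a rough illustration only (a naive, slow DFT; real analyzer software uses the much faster FFT), this sketch locates the dominant frequency of a synthetic 1 kHz tone:

```python
import math

def dft_magnitudes(samples, sample_rate):
    """Naive DFT: (frequency, magnitude) for each bin up to half the sample rate."""
    n = len(samples)
    out = []
    for k in range(n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        out.append((k * sample_rate / n, math.hypot(re, im)))
    return out

# A synthetic 1 kHz test tone sampled at 8 kHz:
rate, freq, n = 8000, 1000, 256
tone = [math.sin(2 * math.pi * freq * i / rate) for i in range(n)]
peak_hz = max(dft_magnitudes(tone, rate), key=lambda p: p[1])[0]
# peak_hz is 1000.0 — the analyzer correctly finds the tone's frequency
```

Plotting 20·log10 of each bin's magnitude against its frequency gives exactly the dB-versus-Hz curve discussed here.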


Figure 2.


The dB axis carries both positive and negative values. This is how loud we perceive the sound: an increase of 10 dB is ten times the power, but it will only "sound" about twice as loud to the human ear.
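
The power-versus-loudness distinction is easy to get wrong, so here is the arithmetic spelled out (the 2× loudness per 10 dB figure is the common rule of thumb, not an exact law):

```python
def power_ratio(db):
    """Power multiplier for a dB change: +10 dB = 10x the power."""
    return 10 ** (db / 10)

def loudness_ratio(db):
    """Rough perceived-loudness multiplier: +10 dB sounds ~2x as loud."""
    return 2 ** (db / 10)

power_ratio(10)     # 10.0 — ten times the power...
loudness_ratio(10)  # 2.0  — ...but only twice as loud
power_ratio(3)      # ~2.0 — the "just noticeable" 3 dB step already doubles the power
```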

In our next graph (Figure 3.) we have the dB axis set to a narrow range, and the curve looks like a large gain (difference) in sound level. When you look at the following spectrum graph (Figure 4.), however, it looks fairly smooth with very little gain. Yet both show the same frequency curve, just on different scales: the first used a 10 dB scale while the second shows an 80 dB range. So how far the curve appears to swing up and down is not what matters; the number of dB it actually moves is.

Figure 3.


Figure 4.

The horizontal axis is our frequency in hertz (Hz). As paranormal investigators we are interested in both high and low frequencies, so we want to set this scale accordingly: set the lower limit at 0 or 10 Hz and the upper limit at around 22–25 kHz.

EVPs: What are they?

OK, so now we know a little about sound and how we perceive it, but what about that great EVP you think you just got? What do we do now? This is where it gets tricky. First, let's talk about EVPs (Electronic Voice Phenomena) themselves. An EVP cannot be heard by the human ear, whereas an AVP (Audible Voice Phenomenon) can be heard by anyone in the vicinity of the sound. EVPs are said to be electromagnetic in nature; that is, several experiments have been done in vacuums and EVPs were still recorded. In one experiment, conducted by David Rountree in the late '70s or early '80s, EVPs were caught in a vacuum; the team discovered that the evacuation pump had not only removed the air (the matter) but had also pulled the diaphragm away from the voice coil in the microphone, yet they still recorded disembodied voices. This does not prove that these were the voices of spirits, only that they were not sound. Only electromagnetic waves propagate in a vacuum, so the signals had to be traveling on electromagnetic waves such as radio waves, or to be radio waves themselves. So EVPs are not, strictly speaking, sound.

I don't want to go into great detail here on all the different types of microphones; I'll just say that the best mic for this work is the dynamic microphone because, once again, it uses electromagnetic induction to transmit the sound to the recorder. It is also always best to use a separate mic rather than the recorder's internal one. The best EVPs are usually said to be caught on cassette tape, and the reason seems to be that on tape (versus digital recording) the audio is written using, you guessed it, electromagnetism. The way a cassette recorder works is that an electromagnet (the recording head) applies a varying magnetic flux to the oxide coating on the tape, and the oxide permanently retains the magnetization pattern it sees.

Thanks to years of research from people like David Rountree, we now know where to look to begin ruling out what is not an EVP. Human voices have distinct phonetic patterns that can be viewed on a spectrograph. From what we have discussed so far, you would think it would be easy to differentiate a potential EVP from ordinary speech using software like Praat, a free scientific program for the phonetic analysis of speech. Unfortunately, an EVP tends to share many of the characteristics of normal speech. It has even been suggested that if you had a recording of a person's voice made while they were alive, you could analyze it and compare it to an EVP attributed to the deceased. This may well be possible, but even experts are never 100% certain of a voice match, and you would also need the exact same words recorded from the person while they were alive to compare the two. So where does this leave us? If we want to rule out as many possibilities as we can, we need as much data as possible to go along with the EVP event. For instance, using software like Praat (whose usefulness in the field and in review I plan to cover in another paper), you could compare the recording of the EVP's spoken word or words against recordings of each of the on-site investigators reciting the same word or phrase, to rule out the possibility that it was one of their voices. Also, and this is my recommendation, you should do EMF sweeps of the entire investigation site, documenting the data on a grid map. EMF should also be monitored throughout the site with multiple meters placed at strategic points. Then, during the EVP session, you record not only the sound but also any EMF spikes that occur.
We know that human speech occupies low frequencies compared to RF waves, so a specialized instrument (an oscilloscope) should be used that records at those levels, and its output should be fed into its software through a data port on the same computer that is recording the EVP session. This has reportedly been done with success: the EMF waveform was a match for the sound waveform of the disembodied voice, suggesting the words were not spoken by a live human being at the investigation site. DiamondCut 8 should be the recording program used during the EVP session because of its real-time capabilities; the operator can monitor both the EMF and sound waveforms on the same computer using the spectrum graph from DC8 and the meter software. The microphone (preferably an external dynamic omnidirectional mic) for these experiments should also be placed inside an aluminum-foil-lined box with the head facing the opening but still behind the inside leading edge of the box; the foil acts as a shield against unwanted interference.
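
I do not know exactly how that waveform match was computed, but one straightforward way to quantify how closely an EMF trace tracks an audio trace is a correlation coefficient. This sketch, with synthetic data standing in for the two recorded channels, returns 1.0 for traces with identical shape regardless of amplitude:

```python
import math

def pearson(a, b):
    """Pearson correlation of two equal-length traces (1.0 = identical shape)."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    var_a = sum((x - mean_a) ** 2 for x in a)
    var_b = sum((y - mean_b) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b)

# Synthetic stand-ins: an "audio" trace and a weaker copy of the same shape,
# playing the role of a matching EMF trace.
audio = [math.sin(2 * math.pi * 5 * i / 100) for i in range(100)]
emf = [0.3 * s for s in audio]
pearson(audio, emf)  # 1.0 — identical shape despite different amplitude
```

Real field data would of course be noisy, so a threshold well below 1.0 would be needed to call two traces a "match".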

Stringent guidelines should be set down for investigation sites and followed each and every time in order to obtain any kind of conclusive data. I could go into great detail on all the possible causes of interference (EMF, harmonics, and much more) in a home or building, but that will be covered in another paper. I hope this is of some help. I know it will raise more questions than it answers, but that is the way this field is. There are so many topics to cover just in dealing with EVPs, which are only a portion of what an investigation entails, that it would take a long paper to cover them all. But I can't stress enough that EMF affects the entire investigation site, that is, the event horizon, and we must keep this in mind at all times.


Jay W. Prather




Internet Radio Research

October 27, 2018

This is simply my reflection on a subject that needs a lot of research. A few years back (2011–12) I was working with internet radio. My results were less than impressive, even nonexistent at times. Before I gave up and tabled the project, I spent a few months reviewing where I might have gone wrong or what might be causing the lack of verifiable communication. Here are my limited and very uneducated findings.

First I wanted to look at radio waves versus the internet, apps and sound bank...

