LedStrip - Audio - Loudness
#1
Hi,

I was impressed by http://jared.geek.nz/2013/jan/sound-reactive-led-lights
I think it's a very interesting animation.
But how do I get this working with BiblioPixel, with:
- 1 LED strip, WS2801 (e.g. 60 LEDs)
- Raspberry Pi SPI
- FFT/NumPy/pyaudio? to get the loudness and transform it into RGB for the LEDs
- setting the LEDs via the WS2801 driver

I already tested https://github.com/ManiacalLabs/BiblioPixelAnimations/blob/master/BiblioPixelAnimations/matrix/FFT_Audio_Animation.py
but this is for LEDMatrix and a spectrum/frequency display.
(Btw., how do I set the sensitivity for this code? I have to shout into the mic :-( )

I think I/we could combine the code from the link above with BiblioPixel code?

Would be great to get this running.
#2
So... this is definitely a popular one, but live audio processing is kind of a pain in general. It's darn near impossible on Windows and seems to be a crap shoot as to whether and how it works on Linux. Our FFT animation works on some systems and not really on others. And since I didn't write that animation, I'm a bit ashamed to say I'm not 100% sure how it works or how to modify the sensitivity. I'll have to look through it more.

I see no reason the code from the first link wouldn't work.... in the end, it outputs RGB values.
The "noisiness" var gives you the sound level and they take that value and map it to a hue value.

I think the "noisiness" value is 0.0 - 1.0, so I would do something like this

h = int(255*noisiness)
c = colors.hue2rgb(h) #our built-in hue/hsv functions are MUCH faster.
led.fill(c, start=0, end=int(led.numLEDs*noisiness))

Do this in every step() of the animation. You'll get a pulsing bar that changes size and color with the music.

Again, this assumes noisiness is 0.0 - 1.0, the code would be a little different if not, but that assumption made it really easy to cheat at the value mapping.
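To make that mapping concrete without a strip attached, here's a self-contained sketch of just the value math described above (the function name `level_to_bar` is mine, not part of BiblioPixel; it returns the hue and the bar length you would then pass to `colors.hue2rgb` and `led.fill`):

```python
def level_to_bar(noisiness, num_leds):
    """Map a 0.0-1.0 loudness value to a hue (0-255) and a bar length in LEDs."""
    noisiness = max(0.0, min(1.0, noisiness))  # clamp, since mic input can overshoot
    hue = int(255 * noisiness)                 # louder -> further along the hue wheel
    end = int(num_leds * noisiness)            # louder -> longer bar
    return hue, end

# quarter volume on a 60-LED strip
print(level_to_bar(0.25, 60))  # (63, 15)
```

In step() you would then do `c = colors.hue2rgb(hue)` and `led.fill(c, start=0, end=end)`.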
#3
Thanks for the hints. I started some quick & dirty modifications on the existing FFT/pyaudio user animation to test whether it would generally work.

At the moment it's not as sophisticated as the linked example from http://jared.geek.nz/2013/jan/sound-reactive-led-lights

Volume is analysed with "audioop": audioop.rms(self.rec.audio, 2)
and then scaled somehow to 0.0 -> 1.0 values (not great at the moment) :-(
I have to read more on the specific topics to understand everything ;-)
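One way to get that 0.0 -> 1.0 scaling without a hand-tuned constant like 1501.0 (and this would also address the sensitivity question from post #1) is to normalize each RMS reading against the maximum of the last few hundred readings, so the range adapts to the mic. A sketch only, not tested against a real microphone; `RollingNormalizer` is a made-up name, not a BiblioPixel class:

```python
from collections import deque

class RollingNormalizer:
    """Scale raw audioop RMS readings to 0.0-1.0 using the peak of recent
    readings, so the animation adapts to quiet mics and loud rooms."""

    def __init__(self, window=500):
        self.recent = deque(maxlen=window)  # last N raw RMS values

    def normalize(self, rms):
        self.recent.append(rms)
        peak = max(self.recent)
        if peak == 0:
            return 0.0  # pure silence so far
        return min(1.0, rms / float(peak))

norm = RollingNormalizer()
for raw in (100, 400, 200, 800, 400):
    print(norm.normalize(raw))  # 1.0, 1.0, 0.5, 1.0, 0.5
```

In step() this would replace the `rms / 1501.0` line: `noisiness = norm.normalize(audioop.rms(self.rec.audio, 2))`.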

Also, I have to wait for a better mic; the current cheap one (used without a sound card) is not so good.

But the proof of concept is working ;-)

If anyone has improvements, I'm very interested!

Original is from FFT_Audio_Animation.py
Code:
#!/usr/bin/env python
#
# Third party dependencies:
#
# pyaudio: for audio input/output - http://people.csail.mit.edu/hubert/pyaudio/
# numpy: for FFT calculation - http://www.numpy.org/

import audioop
import argparse
import numpy
import struct
import pyaudio
import threading
from collections import deque

from bibliopixel import LEDStrip
from bibliopixel.animation import BaseStripAnim
import bibliopixel.colors as colors


class Recorder:
    """Simple, cross-platform class to record from the microphone."""

    def __init__(self):
        """Minimal setup, executed when the class is instantiated."""
        self.RATE = 48000
        self.BUFFERSIZE = 2 ** 12  # 2048 is a good chunk size
        self.secToRecord = .1
        self.threadsDieNow = False
        self.newAudio = False
        self.maxVals = deque(maxlen=500)

    def setup(self):
        """Initialize the sound card."""
        # TODO - windows detection vs. alsa or something for linux
        # TODO - try/except for sound card selection/initiation

        self.buffersToRecord = 1

        self.p = pyaudio.PyAudio()
        self.inStream = self.p.open(format=pyaudio.paInt16, channels=1,
                                    rate=self.RATE, input=True, output=False,
                                    frames_per_buffer=self.BUFFERSIZE)

        self.audio = numpy.empty((self.buffersToRecord * self.BUFFERSIZE),
                                 dtype=numpy.int16)

    def close(self):
        """Cleanly back out and release the sound card."""
        self.p.close(self.inStream)

    ### RECORDING AUDIO ###
    # BufferOverflow: http://stackoverflow.com/questions/6560680/pyaudio-memory-error

    def getAudio(self):
        """Get a single buffer's worth of audio."""
        audioString = self.inStream.read(self.BUFFERSIZE)
        return numpy.fromstring(audioString, dtype=numpy.int16)

    def record(self, forever=True):
        """Record secToRecord seconds of audio."""
        while True:
            if self.threadsDieNow:
                break
            for i in range(self.buffersToRecord):
                self.audio[i * self.BUFFERSIZE:(i + 1) * self.BUFFERSIZE] = self.getAudio()
            self.newAudio = True
            if not forever:
                break

    def continuousStart(self):
        """CALL THIS to start recording forever."""
        self.t = threading.Thread(target=self.record)
        self.t.start()

    def continuousEnd(self):
        """Shut down continuous recording."""
        self.threadsDieNow = True


class visu(BaseStripAnim):

    def __init__(self, led):
        super(visu, self).__init__(led)
        self.rec = Recorder()
        self.rec.setup()
        self.rec.continuousStart()
        # self.colors = [colors.hue_helper(y, self.height, 0) for y in range(self.height)]

    def endRecord(self):
        self.rec.continuousEnd()

    def step(self, amt=1):
        self._led.all_off()
        rms = audioop.rms(self.rec.audio, 2)
        # FIXME: 1501.0 should be adjusted (for my current mic) or good MATH is necessary
        calc = rms / 1501.0
        rnd = round(calc, 1)
        print(rms)
        print(calc)
        print(rnd)
        noisiness = rnd
        # FIXME: prevent too high/low values
        if noisiness > 1.0: noisiness = 1.0
        if noisiness < 0: noisiness = 0.0
        h = int(255 * noisiness)
        c = colors.hue2rgb(h)  # our built-in hue/hsv functions are MUCH faster.
        led.fill(c, start=0, end=int(led.numLEDs * noisiness))
        # self._step = 0

        self._step += amt


# Load driver for your hardware, visualizer just for example
from bibliopixel.drivers.visualizer import DriverVisualizer
from bibliopixel.drivers.WS2801 import DriverWS2801
from bibliopixel.drivers.network import DriverNetwork
from bibliopixel.led import *

import bibliopixel.gamma as gamma

parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group()
parser.add_argument("--visualizer", help="use the visualization driver", action="store_true")
args = parser.parse_args()

if args.visualizer:
    driver = DriverVisualizer(num=62, pixelSize=20)
    led = LEDStrip(driver)
else:
    driver = DriverWS2801(62)
    led = LEDStrip(driver)


led.setMasterBrightness(255)
import bibliopixel.log as log
# log.setLogLevel(log.DEBUG)


try:
    anim = visu(led)
    anim.run(fps=30)
    # anim.run(fps=20, max_steps = 20 * 60)  # 1 minute animation
except KeyboardInterrupt:
    pass

anim.endRecord()
led.all_off()
led.update()
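One possible refinement of the step() above: the rounded RMS jumps from frame to frame, so smoothing the noisiness with an exponential moving average before mapping it to LEDs gives a steadier bar. A sketch only; `Smoother` is a made-up helper, and the `alpha` default is a guess you would tune:

```python
class Smoother:
    """Exponential moving average: each update moves the stored value
    only part-way toward the new sample, damping frame-to-frame flicker."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha  # 0..1: higher = reacts faster, flickers more
        self.value = 0.0

    def update(self, sample):
        self.value += self.alpha * (sample - self.value)
        return self.value
```

Usage inside step() would be `noisiness = self.smoother.update(noisiness)` just before the `h = int(255 * noisiness)` line, with `self.smoother = Smoother()` created once in __init__.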
#4
OK, for the audio analysis I will switch for now to the existing ambi-tv (with audio-grabber support): https://github.com/xSnowHeadx/ambi-tv
It works very well directly with my current cheap mic, is very configurable (even while running), and also works with my Neutrino SAT receiver ;-)

But for the other effects, like the LarsonScanner, I will stay with BiblioPixel ...

