So, on the technical side, I still don’t have this working completely yet. See the thoughts at the end of the post for my open questions and the trouble I ran into.
The 16GB SD card works wonders, and I can see the images. Now it is just a matter of working out how it could work better.
Experimentation
With the SD card upgraded I could begin dabbling in the art of using the Pi NoIR V2 camera we have. With everything set up, though not yet fully up to date, I dropped straight into the Python code from the documentation.
from time import sleep
from picamera import PiCamera

try:
    camera = PiCamera()
    camera.resolution = (3280, 2464)  # full resolution of the V2 sensor
    camera.start_preview()
    sleep(2)  # give the sensor a moment to settle
    for x in range(500):
        camera.capture('/home/pi/Desktop/pics/' + str(x) + '.jpg')
except:
    print('oops')
Since it is just a normal Python script, we can connect the Pi to an HDMI screen and run it. You will immediately notice the “problem” I have:
Since I’ve never used the Pi’s cameras before, we experiment in a few different ways for ourselves.
raspistill -t 3000000 -tl 500 -e jpg --burst -w 3280 -h 2464 -co -5 -sh 100 --ev 6 --exposure sports --imxfx colourbalance
Swapping over to the command line, we can set the resolution and exposure mode, and make some slight adjustments to contrast, sharpness, and exposure compensation.
This, unfortunately, requires more work: the images give off a pink hue. Dropping back into the documentation, I realised I had skipped a step or two, so I figured I should finish a few of them.
sudo apt update
sudo apt full-upgrade
The thing is, people have reported that the config, packages, and setup may not include the latest stable libraries or settings for the OS on your Pi (the latest Raspbian, in my case).
Similarly, one report I saw suggests yet another setting change:
vcdbg set awb_mode 0
Having done all this, the two images (from the start, where I’m sitting at the screen to run it) are as follows:
As you can tell, I still seem to have color balancing issues. I’m not sure what to change or do; I would love to get the quality of photos that I have seen the NoIR V2 can create:
Take note: in the image testing all the cameras, the ambulance to the right has white on its sides and roof. The quick Python test and the quick shell test just above keep the same issue; somehow I am stuck with pink images.
What I swap to is simple:
raspistill -t 3000000 -tl 500 -e jpg --burst -w 1920 -h 1080 -co -5 -sh 100 --ev 6 --exposure sports --imxfx colourbalance --awb greyworld -o /home/pi/Desktop/pics/image%04d.jpg
This command runs in the terminal: -t is a timeout in milliseconds, and -tl is the time-lapse interval. Essentially, 3000000 ms works out to a 50 min run time, and the 500 ms interval gives 2 images per second.
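As a quick sanity check of those numbers, here is a throwaway snippet (not part of the capture setup, just the arithmetic):
# Throwaway arithmetic check for the -t / -tl values used above.
timeout_ms = 3000000   # -t: total run time in milliseconds
interval_ms = 500      # -tl: delay between captures in milliseconds
print(timeout_ms / 1000.0 / 60)       # 50.0 -> minutes of run time
print(1000.0 / interval_ms)           # 2.0  -> images per second
print(timeout_ms // interval_ms + 1)  # 6001 -> frames, counting the extra one at t=0 (the 10-minute run later gives 601, which fits)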
Rewire The Python Script
First, on the desktop, we create the file launcher.sh; this will run the Python script for us.
#!/bin/sh
# launcher.sh
# navigate to home -> desktop -> then execute the python script -> go back home
cd /
cd home/pi/Desktop
sudo python camera.py
cd /
We then need to chmod 755 launcher.sh
Take note, the Python script it runs is different below. To test it we can type sh launcher.sh.
For this to run at every boot, we need to add it to our crontab. With sudo crontab -e we can enter the line:
@reboot sh /home/pi/Desktop/launcher.sh > /home/pi/Desktop/log 2>&1
This will log the output from this “job”. Essentially, whenever the Raspberry Pi reboots, it should run the Python script.
Next, obviously, is adjusting the Python script that we run.
import os
import subprocess

# Only start a time-lapse if the first image does not already exist.
if not os.path.isfile('/home/pi/Desktop/pics/image0001.jpg'):
    subprocess.call('sudo raspistill -t 3000000 -tl 500 -e jpg --burst -w 1920 -h 1080 -co -5 -sh 100 --ev 6 --exposure sports --imxfx colourbalance --awb greyworld -o /home/pi/Desktop/pics/image%04d.jpg', shell=True)
You can no doubt tell this will launch every time we turn the Pi on: if the first image file isn’t there, we take the images using raspistill. Simple and effective; sure, it doesn’t do the motion detection for us, it just takes a time-lapse.
Essentially, when we want to use it we turn the Pi on, it launches, and it checks “is there one or more photos?” If there is a photo, we don’t do the time-lapse. This way we, unfortunately, have to work with the Pi manually to copy the photos off. We will step into that in the future.
I ended up finding my logic was still flawed. It takes a photo too often, and the colors in the images are the same as I was getting before. I’ll have to adjust the Python script; I most likely forgot something, or typed something incorrectly, in the script.
It has become:
import os
import subprocess
import time

a = False
# Only start a time-lapse if the first image does not already exist.
if not os.path.isfile('/home/pi/Desktop/pics/image0001.jpg'):
    subprocess.call('sudo raspistill -t 600000 -tl 1000 -e jpg --burst -w 1920 -h 1080 --awb greyworld -o /home/pi/Desktop/pics/image%04d.jpg', shell=True)
    a = True

# subprocess.call blocks until the capture finishes; this just keeps the script alive afterwards.
while a:
    time.sleep(5)
I shall be waiting for the opportune moment to show off photos, so you can see it is already much better.
As you will note, this takes 601 pictures over 10 minutes. I got my measurements wrong in all the previous attempts. I also need to experiment more with the image colors. There is always more to do, but this is where we will leave it today. I can take 10-minute time-lapses now.
There is an interesting thought for what comes next: instead of just sleeping, the Python script would count the files in the image folder and, whenever that count is divisible (rounded down) by 601, launch the next capture. Perhaps naming them ‘image.1….’, then ‘image.2…’ and so on? I just need to think about that; a rough sketch of the idea is below.
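Something along these lines might work, though this is only a rough, untested sketch of that idea; the 601-per-run figure, the per-batch filename prefix, and the assumption that the folder starts empty are all just placeholders for now.
# Rough, untested sketch of the batch idea above.
import os
import subprocess
import time

PICS_DIR = '/home/pi/Desktop/pics'
FRAMES_PER_RUN = 601  # a 10-minute run at 1 s intervals gives 601 images

while True:
    # Count the images captured so far (assumes the folder starts empty).
    count = len([f for f in os.listdir(PICS_DIR) if f.endswith('.jpg')])

    # Only start the next run once the total is an exact multiple of 601,
    # i.e. the previous batch has finished.
    if count % FRAMES_PER_RUN == 0:
        batch = count // FRAMES_PER_RUN + 1
        # Each batch gets its own prefix, e.g. image.2.0001.jpg for the second run.
        output = '%s/image.%d.%%04d.jpg' % (PICS_DIR, batch)
        subprocess.call(
            'sudo raspistill -t 600000 -tl 1000 -e jpg --burst '
            '-w 1920 -h 1080 --awb greyworld -o ' + output,
            shell=True)

    time.sleep(5)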
Thoughts
It is definitely troubling me that I cannot stop the Pi NoIR camera from taking pink images, with or without the blue filter that came with the camera. Sure, going to greyworld brings more color into the images, but it still feels off.
I would love to get the full NoIR view experience in the images it takes; it would roughly take two 1.3 MB images a second for 1h40m, roughly 260 MB for 200 photos in a shoot. I should definitely swap it to use a button press that makes a new folder for each photoshoot (see the rough sketch below), and could then take roughly 35 of the 1h40m photo sessions. With the battery pack I use it could do two of the 1h40m sessions in a day; for now it can only do one.
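If I go down that button route, something along these lines might do it. This is a completely untested sketch: the GPIO pin number, the gpiozero library, and the timestamped folder names are all assumptions on my part, and the -t value would need raising for a full 1h40m session.
# Hypothetical sketch: start a new photoshoot folder on each button press.
import datetime
import os
import subprocess

from gpiozero import Button

button = Button(17)  # assumed wiring: a push button between GPIO 17 and GND

while True:
    # Wait for a press, then make a fresh, timestamped folder for this shoot.
    button.wait_for_press()
    folder = '/home/pi/Desktop/pics/' + datetime.datetime.now().strftime('%Y%m%d-%H%M%S')
    if not os.path.isdir(folder):
        os.makedirs(folder)

    # Same capture command as before, pointed at the new folder.
    # (-t 600000 is the 10-minute version; a 1h40m shoot would need -t 6000000.)
    subprocess.call(
        'sudo raspistill -t 600000 -tl 1000 -e jpg --burst '
        '-w 1920 -h 1080 --awb greyworld -o ' + folder + '/image%04d.jpg',
        shell=True)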
The next step will be C# code that processes all the images to look for motion. I have ideas for how to implement it, though the experimentation will be rather slow.