What is running on the camera server? (part 3: a big database)

You’ve dropped into a multi-post writeup about an IP camera system I put together for home use. This post describes the 10-terabyte database that stores one week of video, recorded 24/7 from 8 cameras, along with the motion events they report.

As discussed in other posts, the recorded video is split into segments at keyframes. Each video segment is stored in a PostgreSQL “large object.” Creating a large object returns an identifier (an OID), which is stored in a regular table so that the large object can be retrieved later. That table is called “mpeg_segments”, and its important columns are a timestamp marking when the segment began, the large object identifier, and a camera_id so that video from multiple cameras can be kept separate.
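As a rough illustration, here is what that table might look like. Only the table name, the camera_id column, the segment timestamp, and the large object reference come from the description above; every other name, type, and path is an assumption.

```sql
-- Hypothetical sketch of the mpeg_segments table; only the table name,
-- camera_id, the timestamp, and the large object reference are from the post.
CREATE TABLE mpeg_segments (
    camera_id   integer     NOT NULL,  -- which camera produced the segment
    started_at  timestamptz NOT NULL,  -- when the segment began (at a keyframe)
    video_oid   oid         NOT NULL   -- OID of the large object holding the video
) PARTITION BY RANGE (started_at);     -- partitioned by date; see the retention setup below

CREATE INDEX ON mpeg_segments (camera_id, started_at);

-- Storing a segment: lo_import() creates a large object from a file on the
-- database server and returns its OID, which is recorded with the metadata.
-- (Assumes a partition covering now() already exists.)
INSERT INTO mpeg_segments (camera_id, started_at, video_oid)
VALUES (3, now(), lo_import('/tmp/segment-00042.ts'));

-- Reading a segment back: lo_export() writes a large object out to a file.
SELECT lo_export(video_oid, '/tmp/playback.ts')
FROM mpeg_segments
WHERE camera_id = 3
ORDER BY started_at DESC
LIMIT 1;
```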

One of the problems with continuous video recording systems is that the video data is a firehose that never stops. To keep the disk containing the database from filling up, I installed the pg_partman package. pg_partman partitions a PostgreSQL table by date and can be configured to drop old partitions. Each night, after pg_partman’s maintenance runs on my database and the oldest partition is dropped, a cron job kicks off vacuumlo, which deletes the now-orphaned large objects containing the video segments that the dropped partition referenced.
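Here is a hedged sketch of how that retention policy might be configured. The parameter names follow the pg_partman 4.x API, and the table and column names continue the assumptions above, so treat it as a starting point rather than my exact setup.

```sql
-- Partition mpeg_segments by day on its timestamp column. p_premake
-- controls how many future daily partitions are created ahead of time.
SELECT partman.create_parent(
    p_parent_table => 'public.mpeg_segments',
    p_control      => 'started_at',
    p_type         => 'native',
    p_interval     => 'daily',
    p_premake      => 4
);

-- Keep one week of partitions. When maintenance runs (via the pg_partman
-- background worker or a scheduled job), partitions older than the
-- retention window are dropped outright rather than merely detached.
UPDATE partman.part_config
SET retention            = '7 days',
    retention_keep_table = false
WHERE parent_table = 'public.mpeg_segments';
```

The nightly cleanup could then be a crontab entry along the lines of `30 3 * * * vacuumlo camera_db` (database name assumed); vacuumlo scans for large objects that are no longer referenced from any OID column and removes them.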

Here’s a simplified diagram of the most important tables in the database schema and how they are used:

[Diagram: the core tables and their relationships]

The events table is used to collect event notifications sent by the cameras when they detect motion.
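For illustration, a minimal version of that table might look like the following; only its name and purpose come from the post, and the columns are guesses.

```sql
-- Hypothetical sketch of the events table; only its name and purpose
-- (collecting motion notifications from the cameras) are from the post.
CREATE TABLE events (
    camera_id   integer     NOT NULL,  -- camera that reported the motion
    occurred_at timestamptz NOT NULL,  -- when motion was detected
    payload     jsonb                  -- raw notification body, format assumed
);
```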
