In The Mix

As a SharePoint architect, I have the business behind me and the developers and IT Pros on my shoulders.

Recovering Video Process after a Camera failure December 28, 2013

Filed under: Uncategorized — fmuntean @ 10:38 am

Last week my camera failed while recording my daughter's Christmas play. I had used my old Kodak because of the low-light conditions, as it has a bigger CCD sensor than my other Samsung video recorder.

What follows is the story of trying to recover the recording, and the lessons learned along the way.

First, once I got home, I put the SD card into the computer to assess the damage. The card showed a single MOV file, which was expected since I had emptied it just before recording; however, the file size was 0 bytes and the file could not be copied.

clip_image001

Next I checked the disk properties, which showed approximately 800 MB in use.

clip_image002

The fact that I had emptied the card, yet it still showed usage, gave me some hope of recovering the video, so the next step was to recover the file from the card. Before messing with it, I made a backup of the card using a Windows port of the very useful Linux dd tool. You can get the tool here: http://www.chrysocome.net/dd

Using the tool, I saved the whole content of the SD card to an image file, which I then used to extract the video file.
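For readers who want to script this step, imaging the card is just a raw, chunked byte copy, which is all `dd if=<device> of=<image> bs=1M` does. A minimal Python sketch of the same idea (the paths are placeholders; on a real card you would point `src_path` at the unmounted device node):

```python
def image_device(src_path, dst_path, chunk_size=1024 * 1024):
    """Copy a raw device (or any file) to an image file in fixed-size
    chunks, mirroring what `dd` does with bs=1M. Returns bytes copied."""
    copied = 0
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            chunk = src.read(chunk_size)
            if not chunk:  # end of device/file reached
                break
            dst.write(chunk)
            copied += len(chunk)
    return copied
```

Working from the image instead of the card means any mistake during recovery is harmless: you can always re-copy the image from the original.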

 

The next few minutes were spent running different tools like ScanDisk and other file-recovery utilities to see if I could recover the file, but with no luck. However, some of the tools were able to recover two old videos and some picture files located in the free space.

 

Now I had a backup image of the SD card and a few data points: the partition format was FAT16, the expected file size was around 800 MB, and, because I had cleaned the card before recording, the file was most likely at the beginning with all its parts stored in order. I needed to locate and extract the MOV file.

I have manually recovered files from disk partitions before; if you want to learn more about FAT tables, here are some links:

http://en.wikipedia.org/wiki/File_Allocation_Table

http://www.tavi.co.uk/phobos/fat.html

First I needed to learn more about the MOV file format. So I put another empty SD card into the camera and recorded a one-minute movie, then a 30-minute one, to use as baselines. This ensured the videos were recorded with exactly the same settings. If your camera is unusable, you can use any old files recorded by the same camera, but not files from a different one.

 

Next I fired up the browser and started learning about the MOV file format, which is documented by Apple here:

https://developer.apple.com/library/mac/documentation/QuickTime/QTFF/qtff.pdf

and other web links like:

http://wiki.multimedia.cx/index.php?title=QuickTime_container&printable=yes

http://xhelmboyx.tripod.com/formats/qtm-layout.txt

In summary, the file is composed of blocks (Atoms), each with a header and a data payload. The header contains an identifier and the size of the data it holds; the data itself can be made of other Atoms, as the format is hierarchical. Based on that specification, I fired up Visual Studio and wrote a file parser to see which Atoms are used by the two sample files I had saved.

Using this homemade tool, I found that both movies contain three Atoms in the following order: SKIP, MDAT, and MOOV.
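A parser like the one described needs very little: each top-level atom starts with a 32-bit big-endian size (which includes the 8 header bytes) followed by a 4-character type code, per Apple's QTFF documentation. A minimal Python sketch of such a walker (my tool was in Visual Studio; this is just an illustration of the same logic):

```python
import os
import struct

def walk_atoms(path):
    """Return (offset, type, size) for each top-level atom in a MOV file.
    Handles the two special size values the QTFF spec defines:
    size == 1 means a 64-bit size follows the type code,
    size == 0 means the atom extends to the end of the file."""
    atoms = []
    file_size = os.path.getsize(path)
    with open(path, "rb") as f:
        offset = 0
        while offset + 8 <= file_size:
            f.seek(offset)
            size, atype = struct.unpack(">I4s", f.read(8))
            if size == 1:
                size = struct.unpack(">Q", f.read(8))[0]
            elif size == 0:
                size = file_size - offset
            if size < 8:  # corrupt header; stop rather than loop forever
                break
            atoms.append((offset, atype.decode("ascii", "replace"), size))
            offset += size
    return atoms
```

Running this over a healthy baseline file is exactly how you would confirm the SKIP, MDAT, MOOV layout before searching for the same signatures in the damaged image.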

 

The documentation says that SKIP is simply to be skipped; it contained some information about the camera and the Kodak company, in other words quite useless for my needs so far, though it proved invaluable during the file extraction. MDAT, however, contains all the raw video and audio frames of the movie. The MOOV Atom is composed of other Atoms that describe the video and audio frames inside the MDAT Atom and also attach metadata to the video.

Good. Now I knew I was looking for those three Atom IDs inside the saved SD card image. So I fired up Visual Studio again and built a tool to extract the locations of those Atoms and their content. Interestingly, it recovered the same two movies from the empty space as the other recovery tools did, and it found a SKIP Atom near the beginning. I should mention that before trying this approach I had also tried to read the FAT and directory structure to find the file's whereabouts, but with no luck, as the structure had been corrupted by the camera failure.
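When the file system is corrupted, the only option left is scanning the raw image for the atom type codes themselves. A sketch of that signature scan (assuming, as here, that the whole image comfortably fits in memory; an SD card image does):

```python
def find_atom_candidates(image_path, types=(b"skip", b"mdat", b"moov")):
    """Scan a raw disk image for positions where a known atom type code
    appears. Since the 4-byte size precedes the type code, the candidate
    atom header starts 4 bytes before each match."""
    data = open(image_path, "rb").read()
    hits = []
    for atype in types:
        pos = data.find(atype)
        while pos != -1:
            if pos >= 4:  # room for the size field before the type code
                hits.append((pos - 4, atype.decode()))
            pos = data.find(atype, pos + 1)  # keep scanning past this match
    return sorted(hits)
```

False positives are possible (those 4 bytes can occur inside frame data), so each candidate still has to be sanity-checked, e.g. by reading the size field and seeing whether another plausible atom follows.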

So now I knew the file must start where the first SKIP Atom was found, but I had no clue where it would end. It seemed safe to assume it finished before the next SKIP Atom, so I saved that block to a separate file for further analysis and recovery. I ended up with an 830 MB file, which is close enough to the allocated space reported above.

Investigating the new file, I found that the SKIP Atom was intact and exactly the same as the one in the two baseline files; however, the MDAT and MOOV tags were nowhere to be found. The place where the MDAT header was supposed to be was filled with zeroes. My guess is that it would have been filled in later, once the size was known.

Next I tried to locate the MOOV Atom by searching for any tags that would be used inside it, with no luck. That is because this Atom is written last, even though it is the most important one (I will explain why later).

To make the file usable in some way, I created an MDAT Atom header manually with a hex editor, using a size that matched the remaining data, then appended at the end of the file the MOOV Atom extracted as-is from the 30-minute baseline video.
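The header being reconstructed here is only 8 bytes: the atom size (payload plus the header itself) in big-endian, then the literal type code. The hex-editor step could equally be scripted; a small sketch, with the offset and size as inputs rather than the real values from my card:

```python
import struct

def make_mdat_header(payload_size):
    """Build the 8-byte MDAT header the camera never got to write:
    a big-endian 32-bit atom size (payload + 8 header bytes)
    followed by the 'mdat' type code."""
    return struct.pack(">I4s", payload_size + 8, b"mdat")

def patch_mdat_header(image_path, header_offset, payload_size):
    """Overwrite the zero-filled region where the header should have been."""
    with open(image_path, "r+b") as f:
        f.seek(header_offset)
        f.write(make_mdat_header(payload_size))
```

Since the camera zeroed exactly the 8 bytes where the header belongs, patching in place keeps every frame at its original offset, which matters later when sample tables point into MDAT by absolute position.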

Let’s see what I ended up with. I played the file in VLC and, voilà, it started to play, with distorted video and white noise instead of audio; from time to time something like real audio could be heard for less than a second. This told me the raw data was there and just needed to be interpreted "correctly", so I went to the next level: understanding the data inside the MDAT Atom.

From Apple’s documentation I learned that the MDAT Atom is nothing more than a header followed by an arbitrary binary payload; it is the MOOV Atom that is used to interpret the data inside it. So I went on to learn more about the MOOV Atom, which, as you remember, I had just copied from the baseline video with no modifications whatsoever.

Using the MP4Parser ISO Viewer tool (https://code.google.com/p/mp4parser/), I investigated the data inside MOOV and extracted the following info: resolution, frame rate, video format, audio encoding, and audio sample rate.

In short, the MOOV Atom contains two tracks, with their respective info and the data needed to extract them from the MDAT Atom. Since this MOOV Atom came from another file, its extraction information was not directly useful to me. So how could the video play at all if the data used to extract it was wrong? From the extracted info I found that neither the video nor the audio frames are constant in size, which means there must be something inside the MDAT Atom that at least identifies the video frames, and hopefully, with luck, the audio ones too.
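The variable frame sizes show up concretely in the MOOV sample tables: each track carries an 'stsz' (sample size) atom whose payload is, per Apple's QTFF documentation, a version/flags word, a fixed sample size (0 when sizes vary), a sample count, and then one 32-bit size per sample when variable. A minimal sketch of parsing that payload:

```python
import struct

def parse_stsz(payload):
    """Parse the payload of an 'stsz' (sample size) atom.
    Returns the list of per-sample sizes; a non-zero fixed size
    means every sample has that same size."""
    _version_flags, fixed_size, count = struct.unpack(">III", payload[:12])
    if fixed_size != 0:
        return [fixed_size] * count
    return list(struct.unpack(">%dI" % count, payload[12:12 + 4 * count]))
```

A fixed-size table (constant frames) would have made splitting MDAT trivial; a table of varying sizes, as found here, is why the rest of this post has to hunt for frame boundaries in the raw data.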

 

I went a level deeper again and learned more about the MP4 format. As most of the first-hand specifications (ISO 14496) are not available for free, I had to resort to second-hand information, some of which is listed here:

http://thompsonng.blogspot.com/2010/11/mp4-file-format.html

http://thompsonng.blogspot.com/2010/11/mp4-file-format-part-2.html

http://www.jiscdigitalmedia.ac.uk/guide/aac-audio-and-the-mp4-media-format

http://processors.wiki.ti.com/index.php/Extracting_MPEG-4_Elementary_Stream_from_MP4_Container

http://en.wikipedia.org/wiki/Packetized_elementary_stream

http://dspace.uta.edu/bitstream/handle/10106/485/umi-uta-1675.pdf

http://www-ee.uta.edu/Dip/Courses/EE5359/Thesis%20Project%20table%20docs/siddarajuproject.pdf

http://dvd.sourceforge.net/dvdinfo/mpeghdrs.html

In short, for our purposes, any video frame starts with the following 3-byte signature, known as a Start Code: 0x00 0x00 0x01.

As the audio and video frames are interleaved, the block of data between two Start Codes contains either one video frame, or one video frame plus one audio frame.
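This splitting step is a plain substring search; a Python sketch of it (overlapping matches are handled by advancing one byte at a time, so a sequence like 0x00 0x00 0x00 0x01 is still caught):

```python
def find_start_codes(data):
    """Return the offset of every MPEG start code (0x00 0x00 0x01)
    in the buffer; each marks the beginning of a video frame."""
    offsets = []
    pos = data.find(b"\x00\x00\x01")
    while pos != -1:
        offsets.append(pos)
        pos = data.find(b"\x00\x00\x01", pos + 1)
    return offsets

def split_frames(data):
    """Slice the buffer into blocks between consecutive start codes.
    Each block holds one video frame, possibly followed by one
    interleaved audio frame."""
    marks = find_start_codes(data)
    return [data[a:b] for a, b in zip(marks, marks[1:] + [len(data)])]
```

Each resulting block then gets classified as video-only or video-plus-audio, which is what the next tool does.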

Firing up Visual Studio again, I built a small tool to extract a fixed-size block from just before each Start Code, to identify which blocks contain audio. The block size I chose was 1600 audio samples; this value came from the sample-information Atoms of the MOOV audio track.

 

And now it was time to learn about the audio format, which according to the MOOV Atom is encoded as u-Law, stereo, 16 kHz sample rate, 16-bit sample size. u-Law compression achieves a 2:1 ratio by reducing each 16-bit sample to an 8-bit encoded number.

Some links with information on u-Law encoding are provided below:

http://hazelware.luggle.com/tutorials/mulawcompression.html

http://en.wikipedia.org/wiki/%CE%9C-law_algorithm (once the page loads, select all with CTRL+A to see the text)

What it boils down to is that decoding u-Law is very simple: all you need is a lookup table.

static short MuLawDecompressTable[256] =
{
     -32124,-31100,-30076,-29052,-28028,-27004,-25980,-24956,
     -23932,-22908,-21884,-20860,-19836,-18812,-17788,-16764,
     -15996,-15484,-14972,-14460,-13948,-13436,-12924,-12412,
     -11900,-11388,-10876,-10364, -9852, -9340, -8828, -8316,
      -7932, -7676, -7420, -7164, -6908, -6652, -6396, -6140,
      -5884, -5628, -5372, -5116, -4860, -4604, -4348, -4092,
      -3900, -3772, -3644, -3516, -3388, -3260, -3132, -3004,
      -2876, -2748, -2620, -2492, -2364, -2236, -2108, -1980,
      -1884, -1820, -1756, -1692, -1628, -1564, -1500, -1436,
      -1372, -1308, -1244, -1180, -1116, -1052,  -988,  -924,
       -876,  -844,  -812,  -780,  -748,  -716,  -684,  -652,
       -620,  -588,  -556,  -524,  -492,  -460,  -428,  -396,
       -372,  -356,  -340,  -324,  -308,  -292,  -276,  -260,
       -244,  -228,  -212,  -196,  -180,  -164,  -148,  -132,
       -120,  -112,  -104,   -96,   -88,   -80,   -72,   -64,
        -56,   -48,   -40,   -32,   -24,   -16,    -8,     -1,
      32124, 31100, 30076, 29052, 28028, 27004, 25980, 24956,
      23932, 22908, 21884, 20860, 19836, 18812, 17788, 16764,
      15996, 15484, 14972, 14460, 13948, 13436, 12924, 12412,
      11900, 11388, 10876, 10364,  9852,  9340,  8828,  8316,
       7932,  7676,  7420,  7164,  6908,  6652,  6396,  6140,
       5884,  5628,  5372,  5116,  4860,  4604,  4348,  4092,
       3900,  3772,  3644,  3516,  3388,  3260,  3132,  3004,
       2876,  2748,  2620,  2492,  2364,  2236,  2108,  1980,
       1884,  1820,  1756,  1692,  1628,  1564,  1500,  1436,
       1372,  1308,  1244,  1180,  1116,  1052,   988,   924,
        876,   844,   812,   780,   748,   716,   684,   652,
        620,   588,   556,   524,   492,   460,   428,   396,
        372,   356,   340,   324,   308,   292,   276,   260,
        244,   228,   212,   196,   180,   164,   148,   132,
        120,   112,   104,    96,    88,    80,    72,    64,
         56,    48,    40,    32,    24,    16,     8,     0
};

From <http://hazelware.luggle.com/tutorials/mulawcompression.html>
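The table can equally be computed on the fly from the standard G.711 u-Law expansion formula. A Python sketch that reproduces it (the table above uses -1 instead of 0 at index 127, a common convention to avoid a second zero; every other entry matches):

```python
def mulaw_decode(byte):
    """Expand one 8-bit u-Law byte to a 16-bit linear PCM sample
    using the G.711 formula."""
    byte = ~byte & 0xFF            # u-Law bytes are stored bit-inverted
    sign = byte & 0x80             # top bit: negative sample
    exponent = (byte >> 4) & 0x07  # 3-bit segment number
    mantissa = byte & 0x0F         # 4-bit step within the segment
    sample = (((mantissa << 3) + 0x84) << exponent) - 0x84
    return -sample if sign else sample

def mulaw_decode_block(raw):
    """Decode a whole buffer of u-Law bytes to a list of PCM samples."""
    return [mulaw_decode(b) for b in raw]
```

Either approach works; the lookup table is faster, while the formula shows where the 2:1 compression comes from (a sign bit, a 3-bit segment, and a 4-bit step instead of 16 linear bits).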

I also learned how to create WAV files from the decoded PCM data so I could listen to the extracted audio. Links with info on the WAV file format are provided below:

http://www.aelius.com/njh/wavemetatools/doc/riffmci.pdf

https://ccrma.stanford.edu/courses/422/projects/WaveFormat/

Now it was time to start Visual Studio again and create a visual tool to see and hear the possible "sound" blocks saved earlier.
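The WAV-writing part of such a tool is a few lines if a standard library does the RIFF header for you; a Python sketch using the stdlib `wave` module, with the 16 kHz stereo parameters reported by the MOOV audio track as defaults (adjust for your camera; the function name is mine):

```python
import struct
import wave

def write_wav(path, samples, sample_rate=16000, channels=2):
    """Write a list of 16-bit signed PCM samples to a .wav file.
    The wave module writes the RIFF/fmt headers for us."""
    with wave.open(path, "wb") as w:
        w.setnchannels(channels)
        w.setsampwidth(2)  # 16-bit samples
        w.setframerate(sample_rate)
        # pack samples as little-endian signed 16-bit, per the WAV format
        w.writeframes(struct.pack("<%dh" % len(samples), *samples))
```

Chaining this after the u-Law decoder turns any candidate block into something you can immediately audition.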

The length of the sound per block is under a tenth of a second, but you would be amazed at how good the best computer of all (our brain) is at telling random data from real audio 🙂

The ear will only get you so far: it helps identify which blocks contain audio samples and which are pure white noise (i.e., part of the video track), at the expense of hearing a lot of very loud noise. To help find approximately where to split the blocks themselves and mark the beginning of the sound (this can only be done approximately, as there is no identifier or header before the audio frame), plotting the decoded PCM wave helps a lot. Here are some visual samples:

clip_image003

clip_image004

clip_image005

clip_image006

clip_image007

Interestingly, the second picture, and the noise from the video parts of the blocks in general, tends to decode to many values around zero, even though I expected more randomness. That actually helped in deciding where to visually split the file, especially in the many cases where the line between video and audio is not very clear (e.g., loud high-frequency audio).

Some blocks are easier to identify than others, but with this method there is no way to split audio from video to the exact byte, which results in some pings (every 10 ms or so) in the audio. That said, it is still far better than what the media player produced using the "invalid" MOOV audio track. The audio that results from trimming the beginnings (remember, the end is always correct, since we know the bytes after it are the start of the following video frame) and joining the PCM samples into a WAV file is good enough, especially after running some filters, audio optimizations, and noise cleanup in Audacity (http://audacity.sourceforge.net/).

In my case it seems that roughly one in every three frames contains audio samples. This means that two-thirds of the video frames could be reconstructed exactly in the MOOV track sample information, and the rest approximately, given the marked splits. As the video is 30 minutes long and I extracted over 45K blocks with possible audio, this will take a while to process.

My plan is to only process the first 5 minutes and see if the video quality improves.

 

The next step would be to process the blocks that contain both video and audio deeply enough to find the exact size of the video portion, thus fully restoring the video. This is possible in theory but impractical, given that many other parents taped the event, so I can get an alternate video from them.

During my investigation I did not find many articles about the video recovery process, so I hope this post helps people understand how to restore a corrupted video from a camera.

 

In conclusion, restoring is possible; however, if you know somebody else recorded the event, the cheaper and easier way is to ask that person for a copy.

In general, restoration implies that some data has been lost, resulting in a loss of quality.

If you want to produce a corrupted video, it is easy: just power down the camera while recording. Mine had its power button broken just enough that when somebody tripped over the camera support... well, if you got this far into reading, you know what happened next 🙂

 

PS: If only there were a header around the audio portion of each block, restoration would be easier and maybe better. But hey, I guess the person who invented the format never had to support it by restoring corrupted data from his or her own camera.

 

SharePoint for Project Managers March 11, 2011

Filed under: Uncategorized — fmuntean @ 8:23 pm

We currently use SharePoint for many things: collaboration, portals, custom solutions, requirements gathering, and project management.

As our current use of SharePoint for project management is not public and is process-heavy, I looked around and found a good blog about using SharePoint for project management. The following posts not only do that, but actually provide a great step-by-step introduction to the world of PMs.

So if you are a beginner who wants to learn more about project management, or an expert in Microsoft Project who wants to learn how SharePoint can help others understand the complex plan you put together, follow this recipe for a simple and transparent way of managing projects.

 

PM Guide – (1) Initiate the Project

  • (i) Get the Project Approved and Resources Allocated
  • (ii) Decide the Project Management Process
  • (iii) Create a Collaborative Project Site

PM Guide – (2) Plan and Setup the Project

  • (i) Plan the Project
  • (ii) Desk Check the Project Plan
  • (iii) Notify the Team of their Responsibilities

PM Guide – (3) Work on the Project

  • (i) Find Work
  • (ii) Do Work
  • (iii) Update Progress on work

PM Guide – (4) Track and Re-Plan the Project (continuously until project closure)

  • (i) Check and understand the project’s progress
  • (ii) Find and Manage Exceptions (e.g. issues, risks and change requests)
  • (iii) Re-Plan the project

PM Guide – (5) Close the Project

  • (i) Run Project Post-Mortem and Track Lessons Learnt
  • (ii) Close out the Project site
  • (iii) Capture any useful modifications made to the project site for use on future projects

And he does not stop there, but continues the guide into the depths of project management:

 

PM Guide – (6) Project Management and Your Leadership Style

They even provide you with the templates (for a price) or you can actually build your own.

 

PM Guide – (7) Collaborative Project Management Sites

PM Guide – (8) Exercise – Build Your Own Project Management Approach

 

I hope this helps people understand how they can use SharePoint for project management; if you are reading this blog, you surely already have SharePoint.

Not sure if you are aware, but if you are one of the very experienced project managers out there, Microsoft Project (Server) integrates nicely with SharePoint Server.

 

I recommend SharePoint for project management for one single reason (OK, two): Visibility & Transparency.

 

Want to dig deeper into this world?

Book cover of SharePoint for Project Management

SharePoint for Project Management

How to Create a Project Management Information System (PMIS) with SharePoint

By Dux Raymond Sy
Publisher: O’Reilly Media
Released: October 2008
Pages: 256
 

Moving your boot from VHD to a bigger drive June 27, 2010

Filed under: Uncategorized — fmuntean @ 4:46 pm
Tags:

The most important thing about boot from VHD is that all your system files live in a single VHD file on your physical HDD. This lets us easily upgrade the HDD by copying the VHD file from one drive to the other and then fixing the boot configuration on the new HDD.

Following are the detailed steps to accomplish this:

Attach the new HDD as an external one or as a second one if your computer permits.

Server Manager -> Storage -> Disk Management

New Simple Volume (format NTFS, quick), mount as F:

Mark Partition as Active

Command prompt

Bcdboot c:\windows /s f:

Bcdedit /export f:\backup.bcd

Reboot

Boot from Win7 or Windows 2008R2 DVD

After "windows loading files…" message when the initial setup screen is displayed press SHIFT+F10 for command prompt

Copy vhd file from old disk to new one (you need to find the drive letters again as they might not match)

Usually

C: is the internal HDD

D: is the DVD

E: is the external HDD

Xcopy /J c:\win2k8R2.vhd e:\ (wait… it will take a while)

NOTE: Or you can copy the entire hard drive now, including hidden folders, using xcopy /H

I am only copying the VHD so I can get my computer up and running faster; I will copy the rest of the files later in the background.

Close command prompt -> cancel installation (ALT+F4) -> shutdown the machine

Remove old hard drive and replace with the new one.

Start computer.

Boot from install DVD again and start the command prompt

Bcdedit /import c:\backup.bcd

Fix the entry

Bcdedit /set {default} device vhd=[locate]\win2k8R2.vhd

Bcdedit /set {default} osdevice vhd=[locate]\win2k8R2.vhd

Reboot and voilà!

 

Why Boot from VHD is not virtualization November 25, 2009

Filed under: Uncategorized — fmuntean @ 11:59 pm

Because of the marketing and the current buzz about virtualization, many people think Boot from VHD means virtualization is involved and that there will therefore be a performance hit on your system.

Let me explain how Boot from VHD really works:

Every operating system has a boot loader that needs to recognize the file system and start loading the bits of your OS. It is the first software to run after the BIOS completes.

The boot loader uses either BIOS calls or special drivers to load the OS files. What Microsoft has done here is include support not only for the NTFS file system but for the VHD format too, so the boot loader can locate the VHD file and then read the data inside it as needed.

Once the OS has started, another driver, this time a Windows driver, takes over reading the files; Microsoft has implemented a special disk driver that recognizes the VHD file format.

So there is no virtualization involved here; all that is involved is another layer of redirection before getting to the real data on the disk. The driver locates the position of the requested data inside the VHD file, then maps that position onto the host NTFS file system to read the data from the real HDD. In fact, it has been known for years that there is yet another redirection inside the HDD itself, whose purpose is to hide bad sectors by remapping them.

There are few types of VHD formats:

  • Fixed VHD: This one allocates the entire size on the physical HDD and is the fastest. I would recommend always using this type when booting from VHD.
  • Dynamic VHD: Space is allocated on demand; however, when booting from VHD, the boot loader expands the file to its maximum specified size, making boot a little slower because of the time it takes to enlarge the file at startup and shrink it back at shutdown. The other problem with this type, when used for boot from VHD, is that if there is not enough free space you get an error and cannot boot until you make the required space available. There is not much to gain from booting from this type, and lots of pain, so I recommend it only for Hyper-V VMs.
  • Differential VHD: This type introduces yet another redirection, to a base VHD, as it contains only the bits modified relative to its parent. It is very useful when you have multiple VHDs with small differences between them. One problem is that you cannot just patch the parent VHD and have the change propagate to the rest: the parent VHD should be treated as read-only, and replacing it requires replacing the differential ones too. I don't see much use for it in our situation, as usually your VHDs are very different from each other; in a QA environment, however, there is much to gain from it.
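The three types can also be told apart programmatically: per Microsoft's VHD format specification, every VHD ends with a 512-byte footer that starts with the "conectix" cookie and carries a 32-bit big-endian disk-type field at offset 60 (2 = fixed, 3 = dynamic, 4 = differencing). A small sketch of reading it:

```python
import struct

VHD_TYPES = {2: "fixed", 3: "dynamic", 4: "differencing"}

def vhd_disk_type(path):
    """Read the 512-byte footer at the end of a VHD file and
    return its disk type as a string."""
    with open(path, "rb") as f:
        f.seek(-512, 2)  # footer occupies the last 512 bytes
        footer = f.read(512)
    if footer[:8] != b"conectix":  # magic cookie at the footer start
        raise ValueError("not a VHD: footer cookie missing")
    (disk_type,) = struct.unpack(">I", footer[60:64])
    return VHD_TYPES.get(disk_type, "unknown")
```

Handy for auditing a folder of VHDs before deciding which ones are safe to boot from.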

In conclusion, Boot from VHD is a great technology finally implemented in Windows (it has been available in Linux for a long time). The main performance impact actually comes from the old fragmentation issue, both inside the VHD file and outside on the host NTFS file system. My recommendation is to use fixed-size VHDs and not try to gain a little space by using Dynamic or Differential VHDs, as it is not worth the headache.

 

Windows 7 boot from VHD October 10, 2009

Filed under: Uncategorized — fmuntean @ 2:12 pm

I have started playing with the new boot-from-VHD feature available in Windows 7 and Windows Server 2008 R2.

Until now I always created a separate partition for the operating system; now, using the boot-from-VHD feature, I can have a single partition and keep the OS installed inside a single VHD file.

There are some things to be aware of when doing a setup like this:

  1. Be aware when using dynamic VHDs that when the computer boots up, the VHD gets expanded to its maximum size. So if you created a dynamic VHD with a maximum size of 127 GB, you had better have that much free space on your hard drive, or the operating system will fail with an ugly blue screen. My recommendation is to use fixed-size VHDs.
  2. There is no easy way to move from a physical installation to boot from VHD, due to the expansion of the VHD: creating a VHD from a big partition with a lot of free space and then storing it on that same partition will fail, because the VHD expands to slightly more than the space available, due to the header information in the file.
  3. The way I recommend migrating from physical to boot from VHD is to use Disk Management –> Shrink Volume before migration, then do a Windows Backup to VHD (available starting with Windows Vista), then expand the partition back again using Disk Management, thus leaving enough space for the VHD.
  4. If you try to reuse the same VHD images to run them under Hyper-V, you are out of luck: the files are exclusively locked by the system, and they have the physical drivers installed.
  5. I was happily surprised to see that the swap file gets stored on the main hard drive next to the VHD instead of inside it. This is a great improvement, as it speeds up the OS, and it was not even mentioned during the launch event.

In the end, boot from VHD is great because it virtualizes only HDD access, giving me the possibility of having both Windows 7 and Windows Server 2008 R2 on the same machine. Who knows, maybe in the future I will use boot from VHD for my development machines instead of Hyper-V.

 

RePDC 2008 October 15, 2008

Filed under: Uncategorized — fmuntean @ 9:59 pm

Today Bob Familiar announced at the MSDN Road Show in Waltham that on January 22nd, 2009, the best of PDC 2008 will be arriving in Boston. So if you are like me and can't make it there this year, mark your calendar for this great event.

Thank you, Bob, for your hard work putting this together and making it happen. I really enjoyed ReMIX07 and I am looking forward to this one.

 

My first CodePlex Project October 3, 2008

Filed under: Uncategorized — fmuntean @ 8:54 pm

I have already been using this application for a while now.

I just released a SharePoint Site Collection Backup utility on CodePlex.

Please take a look at: http://www.codeplex.com/SPSitesBackup

I find this utility very useful if you have multiple site collections in your farm.

You can use it with My Sites to keep a separate backup of each one; then, when you need to restore one, you can restore just the site of the user who requested it. You can even use it for the offboarding process (just keep a copy of the user's personal site and delete the site from the farm).

However, I want to stress that you still need a full farm backup, because this utility is not a replacement for that.

If your application contains multiple projects, each in a separate site collection (my recommendation), then you can use this utility to archive them separately.

If you find this useful, please leave me a comment and tell me how you are using it.