Thursday, February 7, 2019

Regarding Google's recent policy changes

Since I opened this blog in 2013, I have been thrilled to receive so much attention from hackers and DIYers. There have been many great discussions in the comment sections. Personally, I have learned a lot, and I think many visitors benefited from the comments as well!

It is unfortunate that Google is permanently shutting down its Google+ service, which is having a painful impact on this blog.

Yesterday, I was astonished to find that all the comments under my blog posts had been deleted, because they were all Google+ comments. I looked at the recent policy change and found the following message:

"Following the announcement of Google+ API deprecation scheduled for March 2019, a number of changes will be made to Blogger’s Google+ integration on 4 February 2019. 

Google+ Comments: Support for Google+ comments will be turned down, and all blogs using Google+ comments will be reverted back to using Blogger comments. Unfortunately, comments posted as Google+ comments cannot be migrated to Blogger and will no longer appear on your blog. "

So basically, all the old comments are removed!

It is definitely a big loss to me and to visitors of this blog. The good news is that Google is not closing Blogspot, and people can still leave comments as Google users rather than Google+ users.

I have been trying to download my Google+ archive, which does include some old comments. But they are very unstructured. If anyone has found a solution to add the old comments back, please leave a comment!

Tuesday, August 28, 2018

Deep Learning with Raspberry Pi -- Real-time object detection with YOLO v3 Tiny! [updated on Dec 19 2018, detailed instruction included]

A quick note on Dec 18 2018:
Since I posted this article in late August, I have received many inquiries about detailed instructions and the Python wrapper. Having been really busy over the last several months, I finally found some spare time to complete this post with detailed instructions! Everything can be found in my GitHub repo, which was forked from shizukachan/darknet-nnpack. I have modified the Makefile, added two non-blocking Python wrappers, and made some other minor modifications. It should "almost" work out of the box!
https://github.com/zxzhaixiang/darknet-nnpack/blob/yolov3/rpi_video.py

Here goes the updated article

I am a big fan of Yolo (You Only Look Once; see the Yolo website). Redmon & Farhadi's famous Yolo series has had a big impact on the deep learning community. BTW, their recent "paper" (Yolo v3: An Incremental Improvement) is an interesting read as well.

So, what is Yolo? Yolo is a cutting-edge object detection algorithm, i.e., it detects objects in images. Traditionally, people slid a window across the image and tried to recognize the snapshot at every possible window location. This is of course very time consuming, because there are many ways to place the window and many computations get repeated. Yolo, standing for "You Only Look Once" (not You Only Live Once), smartly avoids that heavy computation by predicting object categories and their bounding boxes simultaneously in a single pass.

YoloV3 is one of the latest updates to the Yolo algorithm. The biggest change is that YoloV3 now uses only convolutional layers and no fully-connected layers. Don't let the technical terms scare you away! What this implies is that YoloV3 no longer cares about the input image size! As long as the height and width are integer multiples of 32 (such as 224x224, 288x288, 608x288, etc.), YoloV3 will work fine. Another major improvement in YoloV3 is that it also makes predictions at intermediate layers. What this means is that YoloV3 now does a better job detecting small objects than its previous versions!

I will skip the technical details here because the paper explains everything. The only thing you need to know is that Yolo is lightweight, fast, and decently accurate. It is so lightweight and fast that it can even run on a Raspberry Pi, a single-board computer with a smartphone-grade CPU, limited RAM, and no CUDA GPU, to do object detection in real time! It is also convenient because the authors provide configuration files and weights trained on the COCO dataset, so there is no need to train your own model if you are only interested in detecting common objects.


Although Yolo is super efficient, it still requires quite a lot of computation. The original full YoloV3, which was written with a C library called Darknet by the same authors, reports a "segmentation fault" on the Raspberry Pi 3 Model B+ because the Pi simply cannot provide enough memory to load the weights. The YoloV3-tiny version, however, can run on the RPi 3, albeit very slowly.

Again, I wasn't able to run the full YoloV3 on the Pi 3, and I don't think it is possible given YoloV3's large memory requirement. This article is all about implementing YoloV3-tiny on the Raspberry Pi 3 Model B!

Quite a few steps still have to be done to speed up yolov3-tiny on the Pi:
1. Install NNPACK, an acceleration library that lets neural networks run on a multi-core CPU
2. Add some special configuration to the Makefile so the Darknet Yolo source code compiles for the Cortex CPU with NNPACK optimization
3. Either install OpenCV for C++ (a big pain on the Raspberry Pi) or write some Python code to wrap darknet. I believe Yolo comes with a Python wrapper, but I haven't had a chance to test it on the RPi.
4. Download yolov3-tiny.cfg and yolov3-tiny.weights, and run Darknet with the Yolo tiny version (not the full version)!

Sounds complicated? Luckily, digitalbrain79 (not me) had already figured it out (https://github.com/digitalbrain79/darknet-nnpack). I had more luck with shizukachan's fork, and I made a few more changes to make it easier to follow:

Step 0: prepare Python and Pi Camera

Log in to the Raspberry Pi over SSH or directly in a terminal.
Make sure pip is installed (it should come with Raspbian):
sudo apt-get install python-pip
Install OpenCV. The simplest way on the RPi is as follows (do not build from source!):
sudo apt-get install python-opencv
Enable the Pi camera:
sudo raspi-config
Go to Interfacing Options and enable P1/Camera.
You will have to reboot the Pi to be able to use the camera.
A few additional words here. In the Advanced Options of raspi-config, you can adjust the memory split between the CPU and GPU. Although we would like to allocate more RAM to the CPU so that the Pi can load a larger model, you should leave at least 64MB for the GPU, because the camera module requires it.
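To quickly check that the camera works after rebooting, a minimal test along these lines should do (assuming the python-picamera package, which normally ships with Raspbian; sudo apt-get install python-picamera if it is missing):

from picamera import PiCamera
import time

camera = PiCamera()
camera.resolution = (544, 416)   # width and height should be multiples of 32 for Yolo later
time.sleep(2)                    # give the sensor a moment to warm up
camera.capture('test.jpg')       # if test.jpg appears, the camera is working
camera.close()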

Step 1: Install NNPACK

NNPACK is used to optimize Darknet without a GPU. It is useful for embedded devices with ARM CPUs.
Idein's qmkl can also be used to accelerate SGEMM on the GPU. This is slower than NNPACK on NEON-capable devices and is primarily useful for ARM CPUs without NEON.
The NNPACK implementation in Darknet was improved to use transform-based convolution, allowing 40%+ faster inference on non-initial frames. This is most useful for repeated inference, i.e., video, or when Darknet is left open to keep processing input instead of terminating after each run.

Install Ninja (building tool)

Install PeachPy and confu
sudo pip install --upgrade git+https://github.com/Maratyszcza/PeachPy
sudo pip install --upgrade git+https://github.com/Maratyszcza/confu
Install Ninja
git clone https://github.com/ninja-build/ninja.git
cd ninja
git checkout release
./configure.py --bootstrap
export NINJA_PATH=$PWD
cd
Install clang (I'm not sure why we need this, NNPACK doesn't use it unless you specifically target it).
sudo apt-get install clang

Install NNPACK

Install modified NNPACK
git clone https://github.com/shizukachan/NNPACK
cd NNPACK
confu setup
python ./configure.py --backend auto
If you are compiling for the Pi Zero, change the last line to python ./configure.py --backend scalar
You can skip the following several lines from the original darknet-nnpack repo; I found them not really necessary (or maybe I missed something):
It's also recommended to examine and edit https://github.com/digitalbrain79/NNPACK-darknet/blob/master/src/init.c#L215 to match your CPU architecture if you're on ARM, as the cache size detection code only works on x86.
Since none of the ARM CPUs have a L3, it's recommended to set L3 = L2 and set inclusive=false. This should lead to the L2 size being set equal to the L3 size.
Ironically, after some trial and error, I've found that setting L3 to an arbitrary 2MB seems to work pretty well.
Build NNPACK with ninja (this might take *quite* a while, so be patient. In fact, my Pi crashed the first time; I just rebooted and ran it again):
$NINJA_PATH/ninja
Do an ls and you should see the folders lib and include if all went well:
ls
Test if NNPACK is working:
bin/convolution-inference-smoketest
In my case, the test actually failed the first time, but I just ran it again and all items passed. So if your test fails, don't panic; try one more time.
Copy the libraries and header files to the system environment:
sudo cp -a lib/* /usr/lib/
sudo cp include/nnpack.h /usr/include/
sudo cp deps/pthreadpool/include/pthreadpool.h /usr/include/
If the convolution-inference-smoketest fails, you've probably hit a compiler bug and will have to change to Clang or an older version of GCC.
You can skip the qmkl/qasm/qbin2hex steps if you aren't targeting the QPU.
Install qmkl
sudo apt-get install cmake
git clone https://github.com/Idein/qmkl.git
cd qmkl
cmake .
make
sudo make install
Install qasm2
sudo apt-get install flex
git clone https://github.com/Terminus-IMRC/qpu-assembler2
cd qpu-assembler2
make
sudo make install
Install qbin2hex
git clone https://github.com/Terminus-IMRC/qpu-bin-to-hex
cd qpu-bin-to-hex
make
sudo make install

Step 2. Install darknet-nnpack

We have finally finished configuring everything. Now simply clone this repository. Note that we are cloning the yolov3 branch, which comes with the Python wrapper I wrote, the correct Makefile, and the yolov3 weights:
cd
git clone -b yolov3 https://github.com/zxzhaixiang/darknet-nnpack
cd darknet-nnpack
git checkout yolov3
make
At this point, darknet-nnpack should build by simply running make; the Makefile in this branch is already set up for the Pi. If you use a different fork, be sure to edit the Makefile before compiling.

Step 3. Test with YoloV3-tiny

Despite all of this pre-configuration, the Raspberry Pi is not powerful enough to run the full YoloV3. The YoloV3-tiny version, however, can run at about 1 frame per second.
I wrote two non-blocking Python wrappers to run Yolo, rpi_video.py and rpi_record.py. Both take pictures with the PiCamera Python library, spawn the darknet executable to run detection on each picture, and then load the resulting prediction.png and display it on screen via OpenCV. In other words, all the detection work is done by darknet; the Python code simply handles input and output. rpi_video.py only displays the real-time detection result on the screen as an animation (about 1 frame every 1-1.5 seconds), while rpi_record.py also saves each frame so you can make a GIF animation afterwards. (A simplified sketch of this loop is shown at the end of this step.)
To test it, simply run
sudo python rpi_video.py
or
sudo python rpi_record.py
You can adjust the task type (detection/classification), the weights, the configuration file, and the threshold in this line:
yolo_proc = Popen(["./darknet",
                   "detect",
                   "./cfg/yolov3-tiny.cfg",
                   "./yolov3-tiny.weights",
                   "-thresh", "0.1"],
                   stdin = PIPE, stdout = PIPE)
For more details/weights/configuration/different ways to call darknet, refer to the official YOLO homepage.
As I mentioned, YoloV3-tiny does not care about the size of the input image, so feel free to adjust the camera resolution as long as both height and width are integer multiples of 32.

#camera.resolution = (224, 224)
#camera.resolution = (608, 608)
camera.resolution = (544, 416)
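
For reference, the core of the non-blocking wrapper is roughly the following (a simplified sketch only; rpi_video.py in the repo is the real implementation and watches darknet's output properly instead of sleeping):

from subprocess import Popen, PIPE
from picamera import PiCamera
import cv2
import time

camera = PiCamera()
camera.resolution = (544, 416)

# darknet stays open and keeps reading image paths from stdin
yolo_proc = Popen(["./darknet",
                   "detect",
                   "./cfg/yolov3-tiny.cfg",
                   "./yolov3-tiny.weights",
                   "-thresh", "0.1"],
                   stdin = PIPE, stdout = PIPE)

while True:
    camera.capture('frame.jpg')            # grab a frame with the Pi camera
    yolo_proc.stdin.write('frame.jpg\n')   # hand it to darknet for detection
    yolo_proc.stdin.flush()
    time.sleep(1.0)                        # crude wait; the real script polls darknet's output
    img = cv2.imread('prediction.png')     # darknet saves the annotated image here
    if img is not None:
        cv2.imshow('YoloV3-tiny', img)
        cv2.waitKey(1)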

Here are my test results:

1. It worked. Yolov3-tiny on the Raspberry Pi 3 Model B+ runs at about 1 frame per second (FPS). rpi_video.py prints the time Yolov3-tiny takes to predict on each image; I got numbers between 0.9 and 1.1 seconds per frame. Not bad at all! Of course, you can't do any rigorous fast object tracking, but for a surveillance camera, a slow robot, or even a drone, 1 FPS is promising. NNPACK is critical here; as Shizukachan pointed out, without NNPACK the frame rate drops below 0.1 FPS!

2. Make sure the power supply you are using can truly provide 2.4 A (which is what the RPi 3B wants). I have seen the detection speed drop to 1 frame per 1.7 seconds because the power supply could not provide enough power.

3. It worked, with limitations. Yolov3-tiny is not that accurate compared to the full Yolov3. But if you want to detect specific objects in a specific scene, you can probably train your own Yolov3 model (it must be the tiny version) on a GPU desktop and transplant it to the RPi. Never try to train the model on the RPi; don't even think about it. Starting from Yolov3-tiny pre-trained on the COCO dataset, transfer learning can be leveraged to speed up training.

4. I didn't modify Yolo's source code. When performing a detection task, Yolo outputs an image with bounding boxes, labels, and confidences overlaid on top. If you would like to get that information in digital form, you will have to dig into Yolo's source code and modify the output part. It should be relatively straightforward.
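
As a lighter-weight alternative that avoids touching the C code, you can parse what darknet prints to its console; for each detection it prints a line like "person: 87%" (labels and confidences only; bounding-box coordinates do still require editing the source). A rough sketch, assuming that output format:

import re
from subprocess import Popen, PIPE

yolo_proc = Popen(["./darknet", "detect",
                   "./cfg/yolov3-tiny.cfg",
                   "./yolov3-tiny.weights",
                   "-thresh", "0.1"],
                   stdin = PIPE, stdout = PIPE)

yolo_proc.stdin.write('frame.jpg\n')
yolo_proc.stdin.flush()

# read darknet's console output as it arrives and pick out "label: confidence%" lines
# (this loop blocks while waiting for darknet's next line of output)
for line in iter(yolo_proc.stdout.readline, ''):
    m = re.match(r'([\w ]+): (\d+)%', line)
    if m:
        print('detected %s with confidence %s%%' % (m.group(1), m.group(2)))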

Finally, the results. Note that I sped the video up 5 times; the actual frame rate is about 1 frame per second.


Yolov3-tiny successfully detected the keyboard, a banana, a person (me), a cup, and sometimes the sofa, a car, etc. It classified Curious George as a teddy bear the whole time, probably because the COCO dataset does not have a category called "Curious George stuffed animal". It got confused by the old-fashioned calculator and sometimes recognized it as a laptop or a cell phone. But in general, I was very surprised by the results, and by the frame rate!



Tuesday, August 21, 2018

Deep Learning With Raspberry Pi - Installation

This is going to be the beginning of a series of posts about the fusion of deep learning and the Raspberry Pi!



Deep Learning has become a new world language in the last 5 years. With the latest developments in convolutional neural networks, LSTMs, attention models, GANs, and reinforcement learning, we see a promising trend of training models to do things that people used to believe only the human brain could master, for example writing a caption for an image, composing a piece of music, or driving a car. With millions of images or huge text corpora, a properly designed deep neural network can somehow be calibrated to "learn" a specific task without explicit programming. Normally when people talk about training deep learning models, they talk about CUDA, GPU matrix operations, parallelization, massive memory requirements, and so on.

Now, even the latest Raspberry Pi 3 Model B+ (1.4GHz CPU, 1GB RAM), the most popular single-board computer/development kit/IoT board, does not have enough computation power to train any decent deep learning model. Forget about training. However, this does not mean that deep learning and the Raspberry Pi are mutually exclusive. It is still possible to run a deep learning framework and model on the Raspberry Pi. In fact, it is super fun, and probably also super useful, to run deep learning inference on the Pi. Imagine that your Pi camera can now identify human beings and perhaps who they are, or issue an alert when a bunny is eating your garden, or recognize obstacles for a Pi-powered robot, or display camera frames in van Gogh style, or maybe just play endless Pi-composed jazz. A new world is enabled by Raspberry Pi + deep learning!

As a lazy person, I don't want to reinvent the wheel. Given that there are well-established, robust deep learning libraries such as tensorflow and pyTorch, it makes sense to first try those libraries on the Pi. In this article, I will show how to install tensorflow and keras (a high-level wrapper of tensorflow) on a Raspberry Pi 3 Model B+ running Raspbian Stretch (version 9). I haven't tested the workflow on other Raspberry Pi models or other Raspbian versions, but my intuition is that the Pi 3 Model B or Raspbian Jessie should work the same way.



To proceed, you'll need to understand basic Linux commands and Python programming and know how to use a Raspberry Pi. You do not need to know deep learning; just treat it as a magic black box. I got a lot of help from this post:


    1. Which version of Python? Python 2.7!

Raspbian comes with Python 2.7 and 3.5. Although I am a fan of Python 3 and tensorflow prefers Python 3, for the Pi I still highly recommend Python 2.7. The reason is that installing numpy, scipy, and opencv with Python 2.7 is so much easier and hassle-free! The last thing I want to do is build scipy and opencv from source on the Pi. IT IS GOING TO TAKE FOREVER!

2. Installing prerequisite libraries

In order to install and run tensorflow and keras, you have to install numpy, scipy, and h5py. I also recommend installing OpenCV because, come on, we want to do image stuff with deep learning.

I highly recommend installing those libraries pre-compiled. The Pi is a slow computer, and it might take anywhere from 10 minutes to 2 hours to build those libraries from source on the Pi. And forget about installing OpenCV from source on the Pi; trust me, it is a painful process!

So how to install pre-compiled libraries?

DO THIS
pi:~ $ sudo apt-get install python-numpy python-scipy python-h5py python-opencv           

DO NOT DO THIS
pi:~ $ pip install numpy scipy h5py opencv                                                                       

The second approach usually ends up downloading source packages and running setup.py for a very, very long time. I think scipy took me more than 30 minutes and still failed for some reason. The first approach is easy and fast.

    3. Install Tensorflow

I basically followed the tensorflow official website for this part. Some people said that they had to install an older version of tensorflow like 1.0; however, I was able to install 1.9.0 and run it without a problem (well, there were some harmless warnings).

First, make sure that libatlas, a linear algebra library, is installed. Simply do


pi:~ $ sudo apt-get install libatlas-base-dev                                                                 

Second, let's install tensorflow. A simple pip install is likely to fail here. This is because tensorflow and some associated libraries take more than 100MB, and by default Raspbian allocates 100MB for swap. If you use pip install directly, you will very likely hit memory errors. There are two ways around this. One is to temporarily increase the swap size, install tensorflow, and then change the swap size back; this requires rebooting the Pi twice. An easier way, I believe, is to add an extra argument to pip install:



pi:~ $ pip install --no-cache-dir tensorflow                                                                  

This way, we install tensorflow without caching, so there is no need to change the swap size.

Installing tensorflow takes a while, since for Python 2 some libraries have to be compiled. Time for a cup of coffee.


    4. Installing keras

This took me a while, because for some reason installing keras wants to recompile scipy, and that always failed due to dependency issues. Since I was sure that all the key libraries keras needs were already installed, I only wanted pip to install keras itself. So finally I realized that I just needed to tell pip to ignore dependencies. To do this, simply type

pi:~ $ pip install keras==2.1.5 --no-cache-dir --no-deps                                                 

I didn't test other keras versions, but I think newer versions should be fine.


5. Test that packages are all installed correctly.
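
A quick sanity check like the following confirms everything imports (the version numbers will of course depend on what you installed):

import numpy, scipy, h5py, cv2
import tensorflow as tf
import keras   # prints "Using TensorFlow backend."

print('tensorflow %s, keras %s' % (tf.__version__, keras.__version__))
print('numpy %s, scipy %s, h5py %s' % (numpy.__version__, scipy.__version__, h5py.__version__))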


As I said, there are some warnings. But, hooray!

6. Run a pre-trained model

Keras comes with many well-known pre-trained CNN models for image recognition. As a first try, I tested MobileNet, a small, lightweight CNN introduced by Howard et al. at Google in April 2017. The idea behind MobileNet is that it is so lightweight and simple that it can run on mobile devices.
To test it, I downloaded this image


from this website http://www.shadesofgreensafaris.net/images/uploads/mikumi.jpg by typing the following command in the terminal:
pi:~ $ curl http://www.shadesofgreensafaris.net/images/uploads/mikumi.jpg > image.jpg

Here is the python code
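
Roughly, the code does the following (a minimal version using Keras's built-in MobileNet application; the exact script may differ slightly):

from keras.applications.mobilenet import MobileNet, preprocess_input, decode_predictions
from keras.preprocessing import image
import numpy as np

# load the pre-trained MobileNet (downloads the ImageNet weights on first run)
model = MobileNet(weights='imagenet')

# MobileNet's default input size is 224x224
img = image.load_img('image.jpg', target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)

preds = model.predict(x)
print(decode_predictions(preds, top=3))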


And, the bottom of the outputs:

So MobileNet does recognize the impala correctly as its first guess. It took about 40 seconds to load the 4-million-parameter model, and only 3 seconds to make a prediction. Not bad!

Saturday, February 27, 2016

Raspberry Pi 3 vs Raspberry Pi 2, what do we know about it now

The next big thing happening to the Raspberry Pi, a credit-card-sized mini computer and development board, four years after its original launch, is the Raspberry Pi 3 Model B.

As various sources have already pointed out, based on RPi's FCC filing, the new RPi will have an on-board WiFi (2.4GHz only) and Bluetooth module. For example, check my previous post, PCWorld and CNX-software, or the FCC website.

Some blurry images of the new RPi can be found on the FCC website. Here let me do a simple comparison between the Raspberry Pi 2 Model B and the new Raspberry Pi and see what we can figure out just from the circuit board.

First, front side
Top is RPi 2 model B and bottom is the new RPi.



Second, reverse side
Top is RPi 2 model B and bottom is the new RPi.

This is what MagPi said.

Now, what do we know about the new RPi?

1. Name: as shown in the figure, the new RPi is indeed called Raspberry Pi 3 Model B. The tested version is already v1.2, and it was developed in 2015.

2. Processor: The image released by the FCC is very blurry, so we can't tell the exact model number of the RPi 3's processor for sure. However, as advertised by the latest MagPi magazine, the new RPi will use a 64-bit 1.2GHz ARM processor. For comparison, the RPi 2 Model B uses a 32-bit 900MHz ARM Cortex-A7.

3. Wireless: The RPi 3 has an onboard WiFi-Bluetooth module (see the upper right of the reverse side of the Pi). The antenna of the module is in the upper left corner of the front side of the board. It is not very big, so I think its performance is probably similar to those USB WiFi dongles. With the onboard wireless module, we essentially gain one more available USB slot!

4. USB: The RPi 3 Model B has four USB ports, the same as the RPi 2 Model B.

5. Ethernet: RPi 3 has an ethernet port just like RPi 2.

6. GPIO: Both RPi 2 model B and RPi 3 model B have 40 GPIO pins.

7. RAM: unclear. The image of the new Pi is really blurry...

8. Display: both the RPi 2 and RPi 3 have a standard HDMI port and a composite video-audio jack. However, it seems the new RPi 3 gets an extra display connector (probably in parallel to HDMI?).

9. Others: some minor rearrangement of components (diodes, regulators, etc.).



Raspberry Pi 3 is on the way!

A few hours ago, the United States FCC (Federal Communications Commission) published documents about an upcoming Raspberry Pi 3, submitted by Raspberry Pi Trading under FCC ID 2ABCB-RPI32 on Feb 26, 2016.

The FCC link can be found here 

Apparently the new Raspberry Pi 3 has a built-in 2.4GHz WiFi module and Bluetooth, as shown in the FCC documents. Finally!

Here are some photos from the FCC documents. The new RPi looks very similar to the RPi 2: same number of GPIO pins, same number of USB ports (though with the built-in WiFi module you effectively gain one more!), etc. However, we don't know the detailed specs. The FCC only cares about wireless communication frequency and power; that's why these documents are there. We will need to wait for an official release by the RPi Foundation.






Also please check this PCWorld post.

Monday, December 28, 2015

3d elevation maps and me on Thingiverse

It has been a while since I last updated my blog. As I have been extremely busy with my work and my two-year-old, I haven't been able to find enough time to play with DIY hardware. Instead, I have started doing more software things.

I have been converting some topographic maps into 3D STL models that are ready for a 3D printer. Models I've made include a 6 million:1 Texas elevation map, a France (and some of western Europe) topographic map, a China topographic map, and Rocky Mountain National Park in Colorado.

Texas

France

China


Rocky Mountain National Park



Please find those models under my Thingiverse account:
http://www.thingiverse.com/DanielChai/designs

They are free to download.



I find these 3D elevation maps pretty neat because I can touch and feel them and gain a much more intuitive understanding of the geography.

I am making more of these 3D elevation maps, of course. Feel free to leave a comment if you would like to see some other place in 3D.

Saturday, April 11, 2015

A 3D printer simulator

Recently I wrote a Python program that reads G-code generated by Slic3r, interprets it, and creates a simple 3D plot that simulates a real 3D printer. It provides a nice preview of the G-code.

The code can be downloaded here from my Google Drive.

If the link doesn't work, copy the following URL and paste it into your browser:
https://drive.google.com/folderview?id=0B8QP2HPTAprrVUdKMFZaQXJjRFk&usp=sharing


The folder contains:

simulator_config.txt

This is the configuration file. The file should contain:
line 1: G-code file name (including path), e.g., gcodes\squirrel_export.gcode
line 2: nS. A positive integer. A new section is plotted after skipping (nS-1) sections. The minimum value is 1 (every section is plotted). A smaller nS gives a finer (and larger) plot.
line 3: nP. A positive integer. A new point is plotted after skipping (nP-1) points. The minimum value is 1 (every point is plotted). A smaller nP gives a finer (and larger) plot.
line 4: dx, dy, dz, de. The resolution of the 3D printer in the x, y, z directions and the resolution of the filament extruder.
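
For example, a simulator_config.txt following this layout might look like the lines below (the values are only illustrative; check Rapibot_simulator.py for the exact format and separators it expects):

gcodes\squirrel_export.gcode
2
5
0.1, 0.1, 0.3, 0.01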

Rapibot_simulator.py 

The main program; it reads the configuration from simulator_config.txt, calls Gcode_interpretation_functions.py, and creates the plot.

Simply run this code to get a 3D plot.

Gcode_interpretation_functions.py

A file that contains functions for interpreting G-code.

gcodes

A folder that contains some G-code examples.


Example 

Here is an example of running the "squirrel_export.gcode" file provided in the Google Drive folder.

Snapshot while the code is running
 
Finished! You can rotate the view or zoom in/out

Wednesday, February 25, 2015

3D printer hot end temperature control system Version 2 [unfinished]

Almost a year ago, I posted the first version of a simple temperature control system utilizing only an LM324 quad op-amp, some resistors, and a MOSFET. The system is for a 3D printer's hot end.

The controller has two major functions: (a) turning a power resistor on and off automatically to maintain a constant (manually adjustable) temperature; (b) showing the real-time temperature in the range of 120-260 C. It is simple, cheap, and efficient. The circuit board is shown below.
Temperature control system Version 1. See this post for details. The resistor values here are R0=600 ohm, R1=10k ohm, R2=3.9k ohm, R3=6.19k ohm, R4=2.4k ohm, R5=10k ohm.


However, I realized later, as reader epineh also pointed out, that the second function, displaying the real-time temperature, requires a constant +12V power supply. This could be problematic, because most of the time people use a low-cost ATX computer power supply to power their 3D printer, and those power supplies DO NOT output a constant voltage. When there is a large change in current draw, say when the power resistor is switched on or off, the power supply's output voltage can jump between 10 and 12V. As a matter of fact, the displayed temperature can be off by 20%!! The control function, on the other hand, works just fine because it relies only on the ratio of the thermistor resistance to R0.

One solution is to use a separate logic power supply that outputs a constant 12V for the LM324, and the ATX power supply for the hot resistor R_hot. Another way is to add a voltage regulator that turns the unstable 12V into a stable 5V to serve as the voltage reference for the LM324; then only one power supply is required. The circuit diagram is shown below.

Temperature control system Version 2. The values of the resistors here are R0=600 ohm, R1=10k ohm, R2=3.9k ohm, R3=6.19k ohm, R4=12.4k ohm, R5=10k ohm.

There are two differences between version 1 and version 2.
(a) In version 2, an LM340T5 +5V voltage regulator is used to convert the 10~12V input into a constant +5V, so the display is no longer affected by the status of the hot resistor.
(b) R4 is changed from 2.4k ohm to 12.4k ohm. This is important for displaying the temperature correctly. I will show why later.

[this post is still unfinished]


Friday, September 19, 2014

Change car charger's output voltage

A car charger is a must-have today, thanks to smartphones' larger and larger screens. It is very cheap to get a car charger adapter, which converts the 12V of a car's socket into the 5V of the USB standard. I have several of these too.

However, now I need 9V 1A power while driving, so I decided to convert one of my 5V car chargers into a 9V one.

Initially I was thinking about using a 9V voltage regulator, say an LM7809, to do this. I believe it would work, but I found the solution can be even simpler after I actually opened one of my 5V car chargers.

This is what it looks like inside the 5 V car charger.

On top is a 2A fuse, which connects to the positive terminal of the car charger (12V). The two thin metal sheets connect to ground. The circuit at the bottom does the 12V-to-5V conversion.

Although the large aluminum capacitors at the bottom occupy most of the space, the most crucial part is the IC (integrated circuit) under the black wire.

Looking carefully at the circuit, I found that the IC part number is MC34063A, which is a 1.5A step-up/step-down/inverting DC-DC converter. I found its datasheet online: http://www.onsemi.com/pub_link/Collateral/MC34063A-D.PDF


Sounds promising. The regulator can theoretically output anywhere from 1.25V to 40V, and it can supply up to 1.5A. So it should be possible to achieve my goal of 9V 1A by simply modifying the circuit a little bit.

The datasheet also provides a step-down circuit example, which outputs a fixed 5V.


The output pin of the MC34063A is pin 2. Resistors R1 and R2 form a voltage divider so that the voltage at pin 5 is
V5 = Vout*R1/(R1+R2),
which gives
Vout = V5*(R1+R2)/R1 = V5*(1+R2/R1).

From the circuit we see that V5 is tied to a 1.25V reference by a voltage comparator, meaning that V5 must equal 1.25V. Hence
Vout = 1.25V*(1+R2/R1).


Given R1 = 1.2k and R2 = 3.6k = 3*R1, we get
Vout = 1.25V*(1+3) =  5V.

 

Looking carefully at the circuit, I found that the blue resistor on the right side of the IC is labeled R1, and the yellow resistor hidden beneath the large black capacitor is labeled R2. R1 connects pin 5 to GND, and R2 connects pin 5 to the output (the red wire). They match the schematic above perfectly!

Using a multimeter I found that R1 = 1k ohm and R2 = 3k ohm. Hence Vout = 1.25*(1+3) = 5V.

Now the task is super easy! I removed R2 and replaced it with a 6.2k ohm resistor so that
Vout = 1.25V*(1+6.2k/1k) = 1.25V*7.2=9V.
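
The same formula makes it easy to work out R2 for any other target voltage; here is a quick sanity check of the numbers in Python (using the 1.25V reference and the measured R1 from above):

V_REF = 1.25            # MC34063A internal reference voltage (V)
R1 = 1000.0             # ohms, measured on the board

def vout(r2):
    # output voltage for a given feedback resistor R2
    return V_REF * (1 + r2 / R1)

def r2_needed(v_target):
    # feedback resistor needed for a desired output voltage
    return R1 * (v_target / V_REF - 1)

print(vout(3000.0))      # original R2: 5.0 V
print(vout(6200.0))      # new R2:      9.0 V
print(r2_needed(9.0))    # -> 6200.0 ohms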

Guess what: after this simple modification, the output voltage of the car charger is 9V!



I opened several other car chargers I have, and they all use the MC34063A or equivalent chips. So it looks like this is a pretty standard way to make inexpensive car chargers. Therefore the method I used here should work for those car chargers as well.