Nov 29, 2011

New myFinancialTracker from RBC

RBC just rolled out its new myFinancialTracker, which seems to be very close to a service I loved. I loved its ability to show all your bank accounts and assets in one place, but my concern was having all my account info in their database.

RBC's new myFinancialTracker says:

myFinanceTracker now gives you complete control of your finances by letting you view all your accounts in one place.

  • Link your RBC Royal Bank mortgage, investments and loans.
  • Link accounts you have at other financial institutions.
  • Even add assets or liabilities not linked to a financial institution like a loan from a family member or the equity in your house.
See? Now it does the same thing, so if you are an RBC customer, why don't you give it a try?

Nov 16, 2011

Optimized Oren-Nayar Approximation Shader Code

We presented this approximation code as part of our KGC 2011 presentation. Let me just show the code for those who don't want to read the whole presentation. I also mentioned this to Wolfgang Engel during KGC 2011, because our previous Oren-Nayar code was from his wiki book.

The best part of this approximation is that we eliminated the texture look-up, which turned out to be the bottleneck for us. It is not mathematically correct, but it worked fine for our game, Warhammer 40,000: Space Marine.

half ComputeOrenNayarLighting_Fakey( half3 N, half3 L, half3 V, half roughness )
{
    // Through brute-force iteration I found this approximation. Time to test it out.
    half LdotN = dot( L, N );
    half VdotN = dot( V, N );
    half result = saturate( LdotN );

    half soft_rim = saturate( 1 - VdotN / 2 );    // soft view-dependent rim
    half fakey = pow( 1 - result * soft_rim, 2 ); // modulate lambertian by rim lighting

    // (1 - fakey) * fakey_magic to invert and scale down the lighting
    half fakey_magic = 0.62;
    fakey = fakey_magic - fakey * fakey_magic;

    return lerp( result, fakey, roughness );
}


Nov 15, 2011

Slides: The Rendering Tech of Space Marine

These are the slides Daniel and I presented at KGC 2011. The first half is a quick overview of our rendering passes, and the second half covers the Screen Space Decal and World Occlusion techniques we used for Space Marine.

Feedback on My KGC 2011 Presentation

After I presented The Rendering Tech of Warhammer 40,000: Space Marine at KGC 2011 with my co-worker, Dr. Daniel Barrero, I was curious what the audience thought about our presentation. So here I've compiled all the feedback I was able to find on the net. All of it is pretty positive so far.


Original Text: 그래픽스 관련 강연을 주로 많이 들었는데, 괜찮은 강연들이 꽤 많았습니다. 새로운 기술을 설명하는 강연부터, 상용화된 게임에서 어떻게 적용하는지에 대한 내용들이 괜찮았습니다. 특히나 워해머 강연은.. 강추…ㅠ.ㅠ (자세한 내용은 다음 글에…)
Google Translation: I've heard a lot of graphics related primarily lecture, were many there are quite a good speech. From lectures to explain the new technology commercially available in the game of how to apply it was fine for the information. Especially Warhammer lecture. Gangchu ... tires screech (the following article for more information.)
My Translation: I attended mostly graphics-related presentations and a lot of them were pretty good. Some explained actual theory, and others explained how to apply it to commercial games. I especially highly recommend the Warhammer 40K lecture. (Will post more about this in another post later.)


Original Text: kgc 1일차 2일차를 통 틀어서 워해머 스페이스 마린의 렌더링 기술 섹셕이 가장 좋은 세미나가 아닌가 싶다. 특히 "우린 수학적인 검증따윈 신경안쓴다. 오직 룩만 좋으면 그걸로 끝이다"라는 얘기가 무한 공감간다 ㅋㅋㅋ
Google Translation:  kgc 1 Day 2 through Day Warhammer Space Marines lay seksyeokyi rendering technology like Is not the best seminars. In particular, "We do not care anymore mathematical verification, only if you're rukman're done with it," the story takes an infinite gonggam ㅋ ㅋ ㅋ
My Translation: Among the lectures presented on 1st and 2nd day of KGC 2011, I think the best one was The Rendering Tech of Warhammer 40,000: Space Marine. Especially, I completely agree with what they said during the presentation: "We are not really Nazi about mathematical correctness.  If it looks right to artists, that's the end of the story".

Original Text: ...해외 유명게임 개발자가 와서 강연해 주고 있는데(클라 프로그래머라면 누구든 꿈꾸는 그런 인물!) 맨 앞자리에서 쳐 졸고 있질 않나...(그래, 워해머40K 스페이스마린 렌더링 기술 강연때 맨 왼쪽 앞에 앉은 너희 두놈) 밖에서 받은 기념품 봉지를 뽀시락거리면서 싱경 거슬리게 하지 않나...
Google Translation: ... Internationally renowned game developers, and which came gangyeonhae (client programmers dream of such a person with anyone!), Hit the top in the front seat asleep, is not it displaced ... (yeah, Warhammer 40K Space Marines sitting in front of the left-rendering technology, you two clowns gangyeonttae ) received gift bags from outside, is not disturbed singgyeong pposirakgeorimyeonseo ...
My Translation: ... While famous foreign game developers, who any client programmers would look up to, were presenting, some stupid students were dozing at the first row.  Yes I'm talking about you two who were sitting at the very front in the Rendering Tech of Warhammer 40,000: Space Marine presentation. Also some students annoyed me by making noise from the gift wrapping paper ...

Original Text: ... 꼭 듣고 싶은 강의는... 11월 8일... 16:30~18:40 = 스페이스 마린의 렌더링 기술
Google Translation: ... Lecturers want to hear it ... Nov. 8. 16:30 to 18:40 = rendering technology of the Space Marines
My Translation: Lectures I really want to attend... Nov 8, 16:30 ~ 18:40 = the Rendering Tech of Space Marine

Original Text: 제가 존경하는 게임개발자이신 포프님입니다 ㅇㅂㅇ.... 처음엔 게임개발에 관심가지고 정보 찾아보다가 포프님 블로그에서 북미게임기업취업에 대해 다루는 글을 보게 되었고 댓글을 달면서 소통하다가 알게된분이심. 작년엔 일이있으셔서 KGC 강연에 참가 못하셨는데 올해는 KGC에 참가하시네요 ㅇㅂㅇ ....(프로그래밍 잘 못하지만 가고싶은...근데 수능이랑 겹치지 ㅋㅋ OTL) 나도 커서 포프님처럼 남들앞에서 떨지않고 뭔가 발표하거나 강연해보고 싶은...
Google Translation: Hi I am Pope, who is a respected game developers ㅇ ㅇ f. .... At first I got interested in game development popeunim search of information on a blog that covers some of North America, game companies look for employment and have learned a comment dalmyeonseo minutes hadaga eccentric communication. Have you participated in twelve lectures last year for letting KGC KGC Did not participate in this year was rubbish ....( ㅇ ㅇ Programming f. .. Then I want to go but do not overlap and ㅋ ㅋ SAT OTL) I tremble at the cursor in front of others like you Pope announced, but something or willing to talk ...
My Translation: Pope, the game programmer I respect, is presenting at KGC 2011. I did get to know him while reading and replying to his blog posts on being a game developer in North America. (Although I'm not a good programmer yet, I wish I could attend his presentation.  But I have to take SAT test that day.. -shrugs-)... In the future, I hope to present something in front of people confidently just like Pope.

Original Text: 다니엘 베리노 렐릭엔터테인먼트 프로그래머와 킴 포프 수석 그래픽 프로그래머도 공동연사로 나선다. 둘 다 게임엔진과 게임그래픽 분야의 대가다.
Google Translation: Daniel and Kim Pope berino relrikenteoteinmeonteu senior graphics programmer programmer co-speakers out there. Both the soldier in the field of game engine and game graphics.
My Translation: Also Daniel Barrero (Graphics Programmer) and Pope Kim (Senior Graphics Programmer) from Relic Entertainment are co-presenting at KGC 2011.  Both are the masters of Game Engines and Computer Graphics.

Oct 27, 2011

I Figured Out Wild and Free Chords

I have been trying to find the correct guitar chords for Wild and Free by Damien Rice for a while without any success. The closest one was this, but still not 100% correct.

So I just did it by myself... Enjoy, if you know how to play guitar and you are a fan of Damien Rice just like me. For lyrics, take a look at the above link. meh...

verse: e - bm - c#m - g#m

pre-chorus: a - e - bm - c#m

chorus: f#m - c#m

(Damien does a simple finger trick with the last c#m chord: lift and press your fourth finger again the first time through, then your fifth finger the second time.)

Jun 19, 2011

I use int 3 for assert()

C has the assert() macro. Sure... I don't use it. I don't like the call stack it gives me; it is rather confusing: I want the line with the problem at the top of my call stack. Sure, there's a way to unwind the stack, but that's too much hassle. Instead, I use this for my own assert.

#define ASSERT(expr, ...) if(!(expr)) __asm{ int 3 }

What it does is basically break into the debugger at the line where the ASSERT fails. So when I use this assert, I do something like this:

ASSERT( life == sucks, "LIFE CAN ONLY SUCK");

This string message is for my own reference. When the code breaks in the debugger, it simply shows that code line, so I know what the problem is right away.

As it relies on an x86 HW interrupt, it only works on PC. If you want this to work on PowerPC CPUs, I heard you have to use this instead.

#define ASSERT(expr, ...) if(!(expr)) asm{ trap }

Happy coding. Yay?

Jun 9, 2011

Inside of char* string buffer initialization

I used to initialize a string buffer this way:

char temp[64];
temp[0] = 0;

A few years ago, one of my coworkers at my previous studio, Capcom Vancouver, told me the following way is better:

char temp[64] = {0, };

I don't remember why he said it was better. I've just been using it this way since, because I'm a nice guy who trusts his coworkers. But I finally figured it out... well... by accident...

The other day I was doing some profile captures on Xbox 360 and happened to see what the above code gets compiled into. Once compiled, it turns into this:

char temp[64];
memset(temp, 0, sizeof(char) * 64);

Interesting, huh? It only takes a few microseconds, so it's not that bad, but now I think this type of initialization is not always necessary if the buffer is always filled by strcpy() or a similar function right after (so it eventually becomes null-terminated anyway).

May 30, 2011

Screen-space Lens-Flare in HomeFront?

I consider myself a practical graphics programmer. I believe mathematical correctness is less important than what looks right (or okay) to gamers.

I recently saw an interesting lens-flare technique that goes along with my belief in a game called HomeFront.
In this game, the lens-flare effect is a mere full-screen overlay of a bubble-patterned image, which is revealed only on the pixels where bright lights are.

Look at my awesome picture below:

So in the top-left picture, let's say the yellow part is where the bright light is (and chances are you already have some type of HDR buffer to do effects like bloom). Then it uses the luminance of each pixel as the blend factor for the lens-flare bubble texture, making the final scene reveal the bubble pattern on those bright pixels.

I found this lens-flare technique looks good enough when the high-luminance area is small enough. The only time it looked a bit weird was when a large light, such as a campfire, covered a lot of screen space, revealing too many bubbles at once. It almost made me feel like I was taking a bubble bath. Hah! But I won't complain.

Given that HomeFront was made by our sister studio, Kaos, I could probably ask them if my speculation(?) is correct. But if I did, I wouldn't be able to write this blog post without going through our legal team. So let's just leave it as my own speculation.

I liked this technique. That's all I wanted to say.

p.s.  I saw this technique on PC version.

May 21, 2011

How to Add Generic Convolution Filter to NVTT

A few months ago, I said I would write a post about how to add a generic convolution filter to NVidia Texture Tools once I got clearance from our legal team. And they finally got back to me.

The reason I added this feature at work was that our artists wanted a sharpening filter on mipmaps. This feature was present in the original NVTT 1, but removed from NVTT 2. Given that a sharpening filter is a simple 3x3 or 5x5 convolution filter, I decided to add generic convolution filter support that can take arbitrary coefficients. With this approach, anyone can run almost any convolution-based image-processing algorithm.

NVTT Modification
So here's how. It requires only a few lines of change across 6 files, so I'll just walk you through.

Step 1. Get revision 1277 from NVidia Texture Tools project page.
I haven't tested this on later revisions, but I think it should work unless there were major changes in that source code.

Step 2. Open up src/nvimage/Filter.h and add this constructor.

Kernel2(uint width, const float * data);

Step 3. Open up src/nvimage/Filter.cpp and add this function.
Kernel2::Kernel2(uint ws, const float* data) : m_windowSize(ws)
{
    m_data = new float[m_windowSize * m_windowSize];
    memcpy(m_data, data, sizeof(float) * m_windowSize * m_windowSize);
}

Step 4. Open up src/nvimage/FloatImage.h and add this function prototype.
NVIMAGE_API void doConvolution(uint size, const float* data);

Step 5. Open up src/nvimage/FloatImage.cpp and add this function implementation.
void FloatImage::doConvolution(uint size, const float* data)
{
    Kernel2 k(size, data);
    AutoPtr<FloatImage> tmpImage = clone();

    for (uint y = 0; y < height(); y++)
    {
        for (uint x = 0; x < width(); x++)
        {
            for (uint c = 0; c < 4; c++)
            {
                pixel(x, y, c) = tmpImage->applyKernel(&k, x, y, c, WrapMode_Clamp);
            }
        }
    }
}

Step 6. Open up src/nvtt/nvtt.h and add this function prototype under struct TexImage.
NVTT_API void doConvolution(unsigned int size, const float* data);

Step 7. Open up src/nvtt/TexImage.cpp and add this function implementation.
void TexImage::doConvolution(unsigned int size, const float* data)
{
    if (m->image == NULL) return;

    m->image->doConvolution(size, data);
}

How to Use
Using this is very straightforward. Assuming you already have a TexImage object named image, you can do this:

const int kernelSize = 3;    // let's use a 3 x 3 kernel

// Some coefficients I found that work great for sharpening.
const float sharpenKernel[] =
{
    -1/16.0f, -2/16.0f,      -1/16.0f,
    -2/16.0f, 1 + 12/16.0f,  -2/16.0f,
    -1/16.0f, -2/16.0f,      -1/16.0f,
};

image.doConvolution(kernelSize, sharpenKernel);


p.s. I've also emailed the patch file to Ignacio, the creator/maintainer of the NVTT project. Let's see if it ever makes it into the codebase. :)

May 19, 2011

Theorycraft = Witchcraft? Maybe

Although I can't deny that posts from a lot of graphics programming blogs help us learn cool new stuff, I often worry about the quality of those posts, especially when people claim something not entirely true based on pure "theorycraft" instead of actual experience. Things that make sense in theory don't necessarily make sense in reality, that is.

If you are a decent graphics programmer, you should take only empirical results as truth.

May 16, 2011

Oren-Nayar Lighting in Light Prepass Renderer

This is a conversation I had with another graphics programmer the other day:

  • A: "Using Oren-Nayar lighting is extremely hard with our rendering engine because it is a Light Pre-Pass renderer."
  • Me: "WTF? It's very easy."
  • A: "No. This blog says it's very hard."
  • Me: "Uh... but look at this. I already implemented it in our engine 2 years ago, and it was very trivial."
  • A: "OMG." -looks puzzled-

Okay. So I explained to him how I did it, and I'm going to write the same thing here for people who might be interested. (I think what the original blog post wanted to say is that supporting various lighting models is not easy in a deferred context, which is actually a valid point.)

First, if you don't know what Oren-Nayar is, look at this amazing free book. It even shows a way to optimize it with a texture lookup. My own simple explanation of Oren-Nayar: it's a diffuse lighting model that additionally takes surface roughness into account.

Second, for those who don't know what a Light Pre-Pass renderer is, read this.

K, now the real stuff. To do Oren-Nayar, you only need one additional piece of information. Yes, roughness. Then how can we do Oren-Nayar in a Light Pre-Pass renderer? Save the roughness value in the G-buffer, duh~. There are multiple ways to save roughness in the G-buffer, and this is probably where the confusion came from.

It looks like most light pre-pass approaches use an R16G16 G-buffer to store the XY components of normals. So to store additional information (e.g., roughness), you would need another render target = expensive = not good.

Another approach is to use 8 bits per channel to store the normals, but then you see banding artifacts = bad lighting = bad bad. But, thanks to the Crytek guys, you can actually store normals in three 8-bit channels without quality problems: it's called best fit normals. Once you use this normal storage method, you have an extra 8-bit channel you can use for roughness. Hooray! Problem solved.

But my actual implementation went a bit further, because I needed to store specular power too. So I thought about it, and realized we don't really need all 8 bits for specular power (do you really use any specular power over 127? Or any specular power less than 11?). So I use 7 bits for specular power and 1 bit for a roughness on/off flag. Then roughness is just on or off? No, it shouldn't be. If you think about it a bit more, you'll realize that roughness is effectively an inverse function of specular power. Think of it this way: a rougher surface scatters light more evenly, so its specular power should be lower, and vice versa.

With all these observations, and some hackery hack functions, this is what I really did at the end.

G-Buffer Storage
  • RGB: Normal
  • A: Roughness/Specular Power fusion
Super Simplified Lighting Pre-pass Shader Code

float4 gval = tex2D(Gbuffer, uv);

// decode normal using Crytek's best fit normals method
float3 normal = decodeNormal(gval.rgb);

float specpower = gval.a * 255.0f;
float roughness = 0;
if (specpower > 127.0f)
{
    specpower -= 128.0f;
    roughness = someHackeryCurveFunction(127.0f - specpower);
}

// Now use these parameters to calculate correct lighting for the pixel.

Ta da... not that hard, eh? This approach was fast enough to ship a game on Xbox 360 and PS3, with some further Oren-Nayar optimization through an approximation.

May 9, 2011

Apr 21, 2011

Personal Choice of Version Control System

UPDATE: I have changed my mind on this. Read my new blog post on this.

Being in the gaming industry for about 10 years and playing with some open-source projects means I have dealt with different version control systems, or VCSs, from the completely free, brute-force manual-file-copy method to the very expensive, commercial-grade Perforce.

So which program am I using at home? Subversion... I know! A lot of people will argue that other programs are better, and I am not going to say they are wrong. The reason I'm using Subversion is that it does what I want with the least amount of annoyance. Below is the list of what I need/want from my personal VCS, and how the most popular VCSs do the job:

Windows Support
Yes, I'm an MS whore. I use Windows all the time, and as a game programmer I personally don't see a huge need for Linux for myself. Also, if I can maintain only one OS at home, that's less drama for me. (Yay?)
  • Git(-2): I like Git a lot, especially how it handles branches, so I really wanted to use it on Windows. But as far as I know, the only way to run a Git server on Windows is through Cygwin or msysgit. Cygwin is basically Linux emulation in a sense, and I personally don't enjoy installing it. msysgit is a bit easier to install on Windows, but I still had to set up SSH and whatnot, so there's no one-button solution for Git on Windows. A big no-no to an MS whore like me.
  • Perforce(+1): Perforce supports Windows pretty well.  It comes with easy-to-install server program/service for windows.
  • Subversion(+2): This was actually a big surprise to me.  There is a program called VisualSVN Server, which is one-click solution for Subversion server on Windows.  It just works and comes with https access and access control all in one nice and simple GUI.  This was even easier than installing Perforce.
Occasional Multi-User Support
Although my VCS is mostly there to keep a history and backups of my own codes, sometimes I open it up to my friends so that I can get useful feedback from them.  So having multi-user support is very useful for me.

  • Git(+1): Git can easily support multiple users, but setting up access control for each user can be a bit of a PITA on Windows. When I tried it last time, I had to make fake Windows user accounts and hook them up with SSH.
  • Perforce(-1): Perforce is free for either i) 2 users and 5 client workspaces, or ii) unlimited users and up to 1,000 files. I have more than 1 friend (at least I want to believe that :P ), so the first option doesn't work that great. What about 1,000 files and unlimited users? Well, I've already passed 1,000 files: playing with a 3rd-party open-source project breaks this boundary very easily. Furthermore, I don't want to pay a few hundred dollars for a simple personal VCS.
  • Subversion(+2): As I said earlier in Windows Support section, VisualSVN Server comes with a nice GUI where you can simply setup users and access control.  So another big thumbs up from me.

I'm cheap. I love free stuff.

  • Git(+1): free
  • Perforce(-1): free for limited use. And apparently I'm not limited?
  • Subversion(+1): free

GUI Client
I'm in love with Perforce's nice GUI clients. Not so much with P4V; more with P4Win. But P4Win is discontinued... oh well, P4V is still good enough. Sure, I still use the command line a lot for certain things the GUI clients don't support, but I found that 90% of the time, using a GUI client is much faster and easier.

  • Git(-1): it doesn't have a really nice free GUI client as far as I know. There are some being developed at the moment, but they don't seem mature or free enough to use yet. TortoiseGit is good enough most of the time, but I still prefer P4Win-style, full GUI clients.
  • Perforce(+2): P4Win is awesome. P4V is great, too.
  • Subversion(+1): I found a program called SmartSVN. It has limited functionality unless you buy the pro version, but I found the basic free version good enough for day-to-day operations. Anything that can't be done with the free SmartSVN version, I do with TortoiseSVN; and anything TortoiseSVN can't do, I do from the command line.
Branching
Who doesn't love branching? It's such a neat tool to fuck around (read: experiment) with your code without ruining your projects.
  • Git(+2): I love the powerful branching feature of Git. You don't need to make a copy in different directories, so it helps a lot with path referencing in the code.  Say you have a program that links with library Awesome, and now you wanna branch library Awesome.  With Git, you simply need to switch to different branch and build.  But with other source control systems like Perforce, you will have to branch the library into a different directory and change the library path in your program code.
  • Perforce(-2): As I just explained in the Git section above, branching into a different folder sucks. Also, branching a large number of files is slow because the Perforce server controls everything, and your network is slower than your local disk.
  • Subversion(-1): The speed is fast enough, but you still have to branch into a different directory... ugh... that bothers me.

Final Score
So the final score for me looks like this.
Don't forget: this is the score for my personal needs, not for big giant game studios. So if you ever come back and say "but Perforce is better because it can support 200 users easily", I'm going to make you watch this video for 2 hours before you go to bed.

Mar 31, 2011

Added Generic Convolution Filter to NVidia Texture Tools

I've recently added a generic convolution filter to the NVidia Texture Tools version I'm using at work. (This is based upon an unreleased version that I grabbed from the NVTT Google Code page.) I wanted to contribute it back to the codebase or share it with the public, so I asked for permission from work, which I'm still waiting for.

In the end, it's not hard. If you just dig around the source code, you will find a way to implement it in less than an hour. Or just wait until I get the OK sign from work. :)

Mar 2, 2011

Missing Documentation: tex2Dlod()

Today, I tried to use tex2Dlod() to generate the full mip chain of a texture. The HLSL function tex2Dlod() takes a float4 as the texture coordinate, and the 4th component is supposed to specify which mip level to sample.

So my question was: what value should I pass if I want the 2nd highest mip? I had two educated guesses:

  1. texcoord = float4(u, v, 0, 1);

  2. texcoord = float4(u, v, 0, 1 / number of total mip levels);

Unfortunately, I was not able to find any documentation on this: MSDN doesn't explain it and I couldn't find it on Google either.  (my Google God status is being challenged here. :-) OMG! )

My co-worker, Dr. Barrero, did a quick test in RenderMonkey, and the answer is #1 (the first choice, or simply passing 1 as the 4th component). You can also deduce the same answer from the D3D sampler state D3DSAMP_MAXMIPLEVEL, which is documented as:
D3DSAMP_MAXMIPLEVEL: level-of-detail index of largest map to use. Values range from 0 to (n - 1) where 0 is the largest. The default value is zero.
By the way, this works the same on both DirectX and OpenGL.

Jan 24, 2011

6 weeks paid sick days

So one of my students got a new job at a game studio, and being his awesome teacher, former "sorta" lawyer, and good buddy, I reviewed the job offer for him. It was a pretty standard contract for the game industry: non-competition, flexible work hours, 3 weeks of vacation, and... WHAT?! 6 WEEKS OF PAID SICK DAYS?

Yes, this is real. The contract didn't use the term "sick day", but it was explained very well in plain English. It says something like this:

If you can't come to work because you are sick or injured, the employer will still pay your regular salary and provide continued benefits and insurance coverage, up to the point where you qualify for other income-replacement benefits, or to a maximum of 6 weeks.
I'm glad to see there are still good companies out there that try to treat employees well in the game industry. I'm actually kinda jealous now. :)

Question: how many sick days (and vacation days) do you get from your employer?

Jan 20, 2011

VirtualBox hangs while installing on Windows 7?

I had this problem with both VirtualBox 3.3 and 4.0 on Windows 7. While installing or uninstalling, it hangs, and I can't even kill the installer from the task manager; the only remedy was to hard-reset my computer... blarg...

It turned out VirtualBox is doing something with my network connection. I'm not sure what exactly, but I found an easy workaround:

  1. Disconnect your LAN cable from the back of your system
  2. Install or uninstall VirtualBox
  3. Plug back in your LAN cable

Jan 17, 2011

Booting up to command-line on Ubuntu 10.10 Maverick

I finally installed Ubuntu 10.10 over the weekend. My main motivation for installing Linux (first time in my life) was to have a test server for the web programming I'm doing for fun. Since I really didn't care about a Linux GUI, I installed Ubuntu Server as a base and installed ubuntu-desktop (apt-get install ubuntu-desktop) on top of it, which made everything ugly: whenever I booted up, it showed the GUI login screen (GDM)... yuck...

So I tried to disable GDM. I did a lot of Google searching, but none of the tips I found on the web worked for me. I suspect those tips work for older versions, or for a direct Ubuntu Desktop installation. Finally, with the help of my Linux/Unix guru buddy, Daniel, I managed to do it the correct way. :) This is how I did it:

  1. edit /etc/init/gdm.conf ("sudo vi /etc/init/gdm.conf" for me)
  2. change where it says:

    start on ( filesystem

    to:

    start on (runlevel [5]

  3. change:

    stop on runlevel [016]

    to:

    stop on runlevel [01236]

Apparently this makes the run levels the same as in classic Unix... (On the web, there was another suggestion to achieve this by changing only "stop on runlevel [0126]", but that was giving me a boot-up freeze...)

So there you go. If you have the same problem as me, hopefully this will fix it. :)

Jan 15, 2011

Brain Fart of This Week - Multiply Blend "Op"

I absolutely love my darn stupidity. :) So I had to use the multiply blend op this week at work to implement some kind of per-geometry glow effect that our FE artist mocked up in Photoshop. Unfortunately, he didn't have the PSD file around anymore, so I had to believe what he said: it was using an additive blend layer in Photoshop plus outer glow.

I was not planning to implement a real glow pass, so I decided to support only the additive blend and let our HDR bloom pick up the over-saturated values and do the "outer glow" for me. So I simply re-submitted those glow geometries with a simple colour shader after turning on additive blending...

The result was not great... sure... It turns out he didn't use an additive blend in Photoshop; it was actually a multiply blend... and guess what? I totally forgot that Photoshop doesn't even have a blend mode called ADDITIVE to begin with... uh... first brain fart... but still not the best part. :)

Then the 2nd brain fart was even more fun... I thought I could quickly turn on multiply blending in D3D and show him the result... but I couldn't find D3DBLENDOP_MULTIPLY, so I was like... "hmm, really? multiply blend is not supported on PC... maybe it's only supported on X360 and PS3..." Then later my lead reminded me that a multiply blend can be done by using the destination colour as the source blend factor... uh... awesome brain fart... I knew this... but I completely forgot yesterday...

oh well, story of my life... :)

p.s. the blend modes supported by Photoshop: