After all of the difficulties I’ve been having with setting up Linux on my new ASUS N53SV laptop (see earlier posts), I got an email last week indicating that there is a defect in the Sandy Bridge chipset and Intel is doing a recall. I was kind of getting excited to help with the testing process of possible solutions to the whole switchable graphics debacle, but with a free ticket to get a new and compatible computer, I’m not stressing in the least.

I went ahead and ordered an N53JQ just this last weekend and got the computer in today. This laptop is marginally less powerful – it’s still quad core, but 1.7 GHz instead of 2.0. It has the nVidia GT 425M chip without Optimus – a little bit slower than what comes with the N53SV, but barely. Everything else is just about exactly the same.

I’m happy to say that graphics are working right out of the box (well, after installing the proprietary nVidia drivers upon being prompted). I’ll admit though that until I installed those drivers, the maximum screen resolution was showing up way under the 1366×768 it was supposed to be. Once I rebooted after installing the drivers, everything looked as it should.

I also just managed to get suspend working (haven’t checked hibernation yet) following these directions (see the post by John Dias). Anyone with a USB 3.0 computer is likely to have these problems and need to implement this hack temporarily until suspend and hibernation are properly supported by the xhci drivers (on a side note, it’s worth pointing out that Linux was the very first operating system with USB 3.0 support – this little something just got left for later :-\). My understanding is that the 2.6.38 Linux kernel should also fix this problem (that means Ubuntu 11.04 Natty). So, another potential solution is to try pre-release versions of the kernel or Natty (or whatever your flavor is). Just remember to remove this hack if and when you do upgrade, since it really is just a hack – you don’t want it interfering with something actually written to solve this problem the right way.
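For reference, the hack in question (as I understand it from that post) is a pm-utils sleep hook that unbinds the xhci driver before suspend and rebinds it on resume. A minimal sketch follows – the PCI address is a placeholder and machine-specific, so check `lspci | grep USB` for yours:

```shell
#!/bin/sh
# /etc/pm/sleep.d/20_custom-xhci_hcd (make it executable with chmod +x)
# Unbind the USB 3.0 (xhci) controller before suspend/hibernate and
# rebind it on resume/thaw, to work around broken xhci power management.
# NOTE: 0000:04:00.0 is a placeholder PCI address - find yours with lspci.
BUSES="0000:04:00.0"

case "$1" in
    hibernate|suspend)
        for bus in $BUSES; do
            echo -n "$bus" > /sys/bus/pci/drivers/xhci_hcd/unbind
        done
        ;;
    resume|thaw)
        for bus in $BUSES; do
            echo -n "$bus" > /sys/bus/pci/drivers/xhci_hcd/bind
        done
        ;;
esac
```

Delete this file once you move to a kernel that handles xhci suspend on its own.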

Well, that’s it for now. I’m working on getting my environment all set up on the new computer. I intend to write a little soon about some of the rails/ruby stuff that I’ve been working on lately – as soon as I get everything moving along again. I would also like to make some math related updates soon (category theory and such).

So – an update in this little adventure of mine. It seems that the ordeal of getting the nVidia GT 540M on board the N53SV working is going to be trickier than I was hoping.

It seems that the issue is that the Optimus switching technology on board this new hardware lives mostly at the software level, and is HEAVILY coupled to Windows 7 (and maybe Vista, I forget). That operating system has some technology whereby you can switch, while running, which graphics card you are using (assuming you have two installed), and Optimus takes advantage of this to allow switching between the nVidia card and the CPU’s integrated graphics. The way it works is that the GPU actually pipes everything through the integrated graphics, and does so based entirely on whether or not the graphics processing going on is heavy-duty enough to warrant the GPU. This is what allows Optimus to quickly switch between the GPU and the iGPU as needed.

Cool technology, really, except that it is hopelessly tethered to Windows. This means that getting it working on Linux would require a huge undertaking of rewriting large chunks of Linux base code (I forget at which levels – I think in Xorg and the kernel). This is why nVidia is pretty much refusing to work on Linux support – it’s really a huge job. And apparently, just coding in a switch so that the GPU is always on is not so simple, as a result of all of the complications in the Optimus architecture. I’m not sure why, but from what I understand that is just the simple fact of the matter.

There is, however, some bit of hope in the form of a switchable graphics Linux group. They seem to be making good progress on several switching mechanisms, but the Optimus does seem to pose significant difficulties. Some progress has been made here, but I have yet to be able to test that out myself. Right now, my system is mostly stable, and since I need it that way for development, I am reluctant to start messing around too much, at least just yet (see later).

As it stands, I went ahead and followed some advice in this post by buzz and installed some packages that should have enabled my Sandy Bridge integrated graphics, but I’m not so sure it has worked out for me. I do seem to have gotten better performance in the handling of the windowing system (fewer random crashes and such overall), but other weird things have happened. Before going through buzz’s steps, my Matrix3D OpenGL screensaver was working just fine, but now it crashes the computer. It took me a while to figure out what was going on and why the computer was crashing whenever I left it alone for a while, but the diagnosis was sealed when a) I went to the screensaver settings to see if something was up there, and the screensaver preview crashed the system; and b) I shut down the screensaver from the command line and have since had a pretty stable system (only one crash after a couple of days of being on, running upwards of 60 or 70 tabs in Firefox 4 tab groups, virtual machines, text editors, Songbird,… – are you getting a sense now for why I wanted a quad core with crap tons of memory?).

Other weirdness: the hibernate functionality, which was at least kind of working before, now seems not to work at all, despite my having tried out the fix here. The problem is that I didn’t do as much testing as I should have before also trying to get the Sandy Bridge graphics activated, so I’m not sure whether the USB3 “fix” actually worked, or whether there was something I missed with it.

In short, my system is viable for development at the moment [sidenote on my ruby setup: this was after futzing with my rvm setup and realizing that I didn’t have openssl and readline set up as rvm packages (as opposed to just system packages) – and then also realizing that uninstalling and reinstalling each of the ruby installations is not enough. Each one has to be removed entirely, so that all the configuration headers are reset to point to the same place as the actual packages – GOTCHA].
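Roughly, the fix went like this (a sketch from memory – the exact rvm subcommand names varied between versions from this era, and 1.9.2 stands in for whichever rubies you have installed):

```shell
# Install openssl and readline as rvm-managed packages
# (older rvm versions used `rvm package install` instead of `rvm pkg install`)
rvm pkg install openssl
rvm pkg install readline

# Fully remove the ruby - a plain uninstall is NOT enough, since `remove`
# is what clears the cached configuration so the rebuild picks up the
# rvm-managed libraries
rvm remove 1.9.2

# Rebuild against the rvm-managed packages
rvm install 1.9.2 --with-openssl-dir=$rvm_path/usr --with-readline-dir=$rvm_path/usr
```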

In order to get things running more smoothly, and to help contribute to testing solutions for all of the issues I’m facing, I want to set up another partition with a fresh Linux installation (still debating whether to create a shared home partition, a shared media (as in video and audio) partition, or what) that I can play with without having to worry about wrecking my SSD (sufficiently-stable-for-development) environment.

Once I get this done, I’m gonna really start having at it and probing all of these issues. I’m kind of excited to roll up my sleeves and pick up some new skills in the process. The moral of this story is that though Linux has definitely gotten much better since the last time I tried working with it (5 or 6 years ago), it is still important to do some research regarding compatibility before buying a computer with the intent of putting Linux on it as a primary operating system. With distributions like Mint, it seems that there is support for most everything you need, but there are still brand spanking new technologies that come out and throw everyone for a loop – so don’t get stuck!

So – my MacBook Pro has crapped out. Well, at least partly. I think that the battery (third party, was having issues) fried the motherboard and made the trackpad and keyboard inoperable. To fix it, the Geniuses said that a new motherboard costing over $800 would be required. Lame-O! That plus a new battery and power cord, and I would be looking at the cost of a new computer – and I was already starting to feel the ceiling of my computer’s processing and memory capacity.

I certainly don’t have enough dough right now for another MBP and was feeling adventurous, so I decided to get a PC that I could throw Linux on. After much research and investigation, I made the decision to get an ASUS N53SV-A1. This amazing laptop has the latest generation of Intel i7 processors, is quad core (effectively 8 cores with hyperthreading) and runs at 2.0 to 2.8 GHz (if you overclock). It supports up to 16 GB of memory and comes with an nVidia GeForce GT 540M graphics card. The card can switch over to the CPU’s onboard graphics processing when it doesn’t need the extra juice, to save battery life (Optimus technology). Pretty happy with the decision, all in all.

However, the setup of Linux Mint has been a little more difficult than I was hoping. For the most part everything went fine with the install, and I was able to boot into the new installation of Mint 10 Julia without any issues. During the first boot, though, it mentioned that it needed to get the nVidia driver to take advantage of all of the features the card has. So, I let it install, but upon rebooting, it wouldn’t load the GUI, only take me to the terminal. After doing some snooping around, I discovered that trying to manually start the GUI (xinit or some such command) gave me an error that said “no screens found”, along with some other nonsense.
After looking into this further and messing around with things to the point of having to reinstall, I discovered that nVidia doesn’t actually appear to have drivers for the GT 540M yet. So, no wonder it wasn’t working. It seems my only recourse now is to wait until they come out. Until then, I’ll have to live with the CPU graphics processing. Since it’s an i7 quad-core Sandy Bridge, this shouldn’t be too big a deal for the moment. Setting up OpenCL and Mint visual effects will have to wait (sadly), but in the meantime, I can focus on getting my development environment set up, and will probably have better battery life in the interim.

One other strange and glitchy problem I’ve been having: it seems that whenever I go into suspend mode, I get this weird error at the terminal:

(process:449): GLib-WARNING **: getpwuid_r(): failed due to unknown user id (0)

Everything becomes pretty much unresponsive, and I haven’t yet figured out how to beat this one. Hibernate does seem to mostly work, though (even if I do get some strange message about USB something-or-other briefly as it resumes from hibernation, and even if I have to hit ctrl-alt-F1 to get back in). As the story goes: man goes to doctor and says, while moving his arm, “Doctor, it hurts when I do this.” Doctor says, “Well, then don’t do that.” For now, I have no problem just sticking to hibernation, and have lid closing set to trigger hibernation now instead of suspend.

So, I should have known that getting such a spanking new model would inevitably lead to an issue or two, and I can live with that. The bottom line is that it is mostly working, and I expect that within a month or two, everything will be right as rain. I’m definitely excited to delve into being a Linux user. I don’t expect that it will be too far a stretch from what I’m already used to – as it is, with my development work I have a terminal open 90% of the time anyway. I’ll continue to update as things progress.
Please do leave a comment if you have either discovered a solution to any of the issues mentioned or have any questions pertaining to this setup.

Studying for the mathematics subject GRE, I have lately been going over differential equations material. The section on the subject in the Princeton Review test preparation book is a little suboptimal in my mind – it runs through things a bit too quickly, without tracing out the basic explanations of how things work to the same degree as other sections. To add to this situation, when I studied differential equations back at Evergreen, the route our professor took was heavier on the qualitative side than the bag-of-tricks side. The reasoning was that in the study of differential equations, students often find themselves learning all of these great tricks and come to expect that those are the tools which will help them solve their problems. When they get out into the real world, they find that applying the tricks is usually very difficult, and sometimes none of them apply. Other times, you find a trick that works but gives you such a mess of an answer that you may as well not have solved for it at all. Qualitative techniques are valuable because they enable you to extract information (sometimes all you need to know within the context being worked on) without necessarily solving the equation, and usually these techniques are much easier to apply. Liking this approach so much, I went on to study non-linear dynamics and never took partial differential equations (though I’d like to at some point), which is where the professor doled out the tricks.

Together, a skimpy section on diffeqs in the GRE book and a more qualitative study of differential equations in school have left me with a little extra work to do on this topic. En route, some interesting stuff (new to me) came up that I thought I would sketch out here. The tidbit in question is the relationship between exact and non-exact differential equations.
The basis of exact differentials stems from the following: if you have a family of curves $f(x,y)=c$, they must obey the total differential equation $df = 0$. The total differential is given in the book as $\displaystyle df = \frac{\partial f}{\partial x} dx + \frac{\partial f}{\partial y} dy$. Looking at this, I could kind of see what it was doing, but the concept didn’t really make full sense to me (in that crystal-clear way I like when I’m doing mathematics) until I started exploring the concept of the total derivative (with the help of Wikipedia).

The total derivative of $f$ with respect to $x$ is given as $\displaystyle \frac{d f }{dx}= \frac{\partial f}{\partial x} \frac{dx}{dx} + \frac{\partial f}{\partial y} \frac{dy}{dx}$, which can be obtained informally from the equation above by “dividing through by the differential $dx$”. This derivative gives us a measure of the total degree to which $f$ is changing with respect to $x$ when there is an implicit relationship between $y$ and $x$ – which is the case if we are assuming that $f(x,y)=c$. Once I figured this out, the pieces started falling into place more clearly. Since we could differentiate with respect to either $x$ or $y$ and then “multiply through” by the corresponding differential to get the same total differential form above, it makes sense that this total differential should add up to $0$, since both of the total derivatives do.

Moving on, the text describes how any differential equation of the form $M(x,y)dx + N(x,y)dy = 0$, where $\displaystyle M(x,y) = \frac{\partial f}{\partial x}$ and $N(x,y) = \frac{\partial f}{\partial y}$, would naturally have the family of curves $f(x,y)=c$ as part of its solution space. (Note also that with some continuity assumptions, and limitations associated with the range of possible starting points given by $c$, the Uniqueness Theorem implies that these are the only solutions.)
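To make this concrete, here is a small worked example of my own (not from the book), starting from a known family of curves:

```latex
% Take the family of curves f(x,y) = x^2 y + \sin y = c.
% Its total differential is
\[
df = \frac{\partial f}{\partial x}\,dx + \frac{\partial f}{\partial y}\,dy
   = 2xy\,dx + (x^2 + \cos y)\,dy = 0,
\]
% so the curves x^2 y + \sin y = c solve the differential equation
%   2xy\,dx + (x^2 + \cos y)\,dy = 0,
% with M = \partial f / \partial x and N = \partial f / \partial y
% exactly as in the setup above.
```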
Assuming the continuity of the second partial derivatives, it also follows that we can tell whether such an $f$ exists by checking whether $M_y = N_x$ (the partial derivatives). If so, then we can compute (or try to compute) the integrals $\displaystyle \int M(x,y) \partial x$ and $\int N(x,y) \partial y$ and adjust the constants of integration (which will be single-variable functions of $y$ and $x$, respectively) so that the two resulting integrals match. That matching integral is our $f(x,y)$.

So this is all fine and dandy, and I was happy to get through this little bit, but then I started wondering about non-exact differential equations. The text goes on to discuss equations of the form $\displaystyle M(x,y)dx + N(x,y)dy = 0$ where we don’t have such nice and simple conditions on $M$ and $N$. It shows how, in some cases, you can come up with an integrating factor by which you can multiply both sides of the equation above to obtain an equivalent equation which is exact. It then presents a trick that works in a couple of very specific cases: when either $\displaystyle \frac{M_y - N_x}{N}$ is a function of $x$ alone or $\displaystyle \frac{M_y - N_x}{-M}$ is a function of $y$ alone. The integrating factor in the first case (and similarly in the second), if one lets $\xi(x) = \frac{M_y - N_x}{N}$, is given by $\mu(x) = e^{\int \xi(x) dx}$.

This seemed like a pretty cool trick, but I quickly became curious about why it works, so I decided to find out. All I would need to do (assuming the continuity of these second derivatives) is show that $\frac{\partial}{ \partial y} M(x,y)\mu(x) = \frac{\partial}{\partial x} N(x,y)\mu(x)$.
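Here is a small worked example of the trick (again my own, not from the book):

```latex
% Consider (3xy + y^2)\,dx + (x^2 + xy)\,dy = 0.
% This is not exact: M_y = 3x + 2y but N_x = 2x + y. However,
\[
\frac{M_y - N_x}{N} = \frac{(3x + 2y) - (2x + y)}{x^2 + xy}
                    = \frac{x + y}{x(x + y)} = \frac{1}{x}
\]
% is a function of x alone, so \xi(x) = 1/x and
\[
\mu(x) = e^{\int dx/x} = x.
\]
% Multiplying through by \mu(x) gives
%   (3x^2 y + x y^2)\,dx + (x^3 + x^2 y)\,dy = 0,
% which is exact (both cross partials equal 3x^2 + 2xy), and matching
% the two integrals yields the solution family
\[
f(x,y) = x^3 y + \tfrac{1}{2} x^2 y^2 = c.
\]
```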
The first derivative here was easy with the product rule – the fact that $\mu$ doesn’t depend on $y$ means that we can treat it like a constant and get $\frac{\partial}{ \partial y} M(x,y) \mu(x) = M_y \mu(x)$. Computing the second derivative by the product rule (and the chain rule for $\mu'$), we get \begin{aligned} \frac{\partial}{\partial x} N(x,y) \mu(x) & = N_x \mu(x) + N \mu'(x) \\&= N_x \mu(x) + N e^{\int \xi(x)dx} \cdot \frac{d}{dx}\int\xi(x)dx \\&= N_x \mu(x) + N \mu(x) \cdot \xi(x) \\&= N_x\mu(x) + N\mu(x) \cdot \left(\frac{M_y - N_x}{N}\right) \\&= N_x\mu(x) + \mu(x) \cdot (M_y - N_x) \\&= M_y\mu(x) \end{aligned} And so, sure enough, we have that $\frac{\partial}{ \partial y} M(x,y)\mu(x) = \frac{\partial}{\partial x} N(x,y)\mu(x)$, as desired.

A neat trick, to be sure. I’m curious to see whether there is a way of deriving it by assuming that such an integrating factor exists and solving for it. I may get to that at some point, but for now I’m satisfied that I have a better sense of why this works. What I am even more interested in is the claim that if a non-exact equation has a solution, then there exists an integrating factor for it, though it may be difficult to find. I definitely want to come back and visit all of this at some point, but right now I need to get back to moving through the practice booklet.

I’ve been maintaining a site with Posterous for some time now, and while I like many features of its design (certain elements of simplicity in general), there are some things that I don’t care for about it. One of the biggest is that there is no way to simply embed beautifully typeset $\LaTeX$ equations and mathematical expressions. This is a serious letdown for me, given that mathematics is a huge part of what I want to write about online, especially as I start thinking towards graduate school and getting back into studying mathematics.
Looking around for an alternative to Posterous, I found that WordPress offers some very excellent $\LaTeX$ rendering, and so I’m going to be trying it out here for a while. An example nugget of beauty, both in content and typesetting bliss – $e^{i\pi} + 1 = 0$. Ahh… And best yet, it’s as simple as $\pi$ to use. The above was created by entering the LaTeX code e^{i\pi} + 1 = 0 inside of “$” signs, with “latex” appended to the first (I would show you literally what I mean, but it keeps rendering and I’m not sure how to get it to stop – any suggestions?). Cake, huh? Basically the exact same syntax you would use to enter equation mode when working with actual $\LaTeX$, only you add “latex” after the first “$”. Couldn’t be happier with that part.
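For anyone trying this, the shortcode form of Euler’s identity looks like the following (shown here in a plain code block, which is one way to keep WordPress from rendering it):

```
$latex e^{i\pi} + 1 = 0$
```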

The only problem I have with this at the moment is that I would rather the rendering be done with MathJax, a JavaScript library which I have just discovered. It has a lot of advantages over whatever rendering system WordPress currently has set up. It gives the reader the ability to view the rendered content in either HTML and CSS (think infinitely re-sizable vectors and perfect contours) or MathML (think standards) and also lets the user do all sorts of little things, like scaling all of the equations, zooming in on one equation, and viewing the source LaTeX (or MathML – WordPress shares this feature, though). From their website, you can “copy equations from your web pages into Word and LaTeX documents, science blogs, research wikis, calculation software like Maple, Mathematica and more.” It is also compatible with screen readers for people with disabilities. All really great stuff.

My hat goes off to WordPress for making their software so math friendly. However, it could be made better still, and I hope that at some point they take up the challenge. If they don’t, I may eventually take the initiative to create a math forum with all of the bells and whistles I want. It’s something I’ve been thinking about doing anyway. Aside from math blogging, I would really like to see more in the way of online mathematics communities taking the exploration of mathematics into a more open sphere. There is currently a beta application called Equalis that is attempting to establish something along these lines, but so far I’m not very pleased at all. It’s extremely clunky, and I don’t think it is really aiming at the kind of community I want to see more of. They seem to be looking for professional development; I’m looking for freedom of exploration and openness of ideas.

We’ll see. For now, there’s a good chance that I’ll be using this for my blogging for a while.

Hope you enjoy.