Creating an optical computer

Note on creating an optical computer. 


What is an optical computer?

  • A laptop is a microchip-based computer that uses electricity and transistors to compute.
  • An optical computer uses photons to compute. 

How does it compare to a typical laptop?
  • A modern desktop computer delivers about 5 TFLOPS (5 x 10^12 floating-point operations per second).
  • An optical computer, in principle, has no such fixed limit on calculations per second.


Is an optical computer faster than a quantum computer? 

  • As of 2016, the largest known quantum computer (a quantum annealer) had 2,000 qubits, claimed to be about 1,000 times faster than its 512-qubit predecessor on certain problems.
  • An optical computer has no artificial limitation like 2,000 or 512 qubits.


What's the theoretical compute limit on an optical computer? 

  • The ultimate limit is the speed of light.
  • For now, the only practical limitation is how we design the first prototype.
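To put the speed-of-light limit in concrete terms, here is a rough signaling-ceiling sketch for a hypothetical 1 cm optical die; the die size and the one-traversal-per-cycle model are assumptions for illustration, not properties of our design:

```python
# Back-of-envelope signaling limit imposed by the speed of light.
# The 1 cm die size and the "one traversal per clock cycle" model are
# illustrative assumptions, not properties of any actual design.
C = 299_792_458          # speed of light in vacuum, m/s

def traversal_time(distance_m: float) -> float:
    """Time for light to cross `distance_m` in vacuum, in seconds."""
    return distance_m / C

die = 0.01               # hypothetical 1 cm optical die
t = traversal_time(die)  # ~33 picoseconds
max_clock = 1 / t        # ~30 GHz if each cycle needs one full traversal

print(f"traversal: {t * 1e12:.1f} ps, ceiling: {max_clock / 1e9:.1f} GHz")
```

Even at this ultimate limit, crossing a 1 cm die takes about 33 ps, so a design that requires one full traversal per cycle tops out around 30 GHz; real designs would pipeline many photons in flight rather than wait for each traversal.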


How much electrical energy does it require? 
  • The first POC should draw less than 1,000 W.

Have there been any prior inventions or related work?
  • There are some interesting inventions filed in the patent office.
  • They are limited in the sense that they can't be constructed in a practical way.
  • Our prototype doesn't have these limitations.

Can we construct it now?
  • Yes*

What's the plan for the first prototype or POC?
  • We really can't discuss the specifics of the schedule.
  • Given the first round of funding, we may be able to announce the first POC in a few years.

Why hasn't anyone else done it?
  • There are a number of challenges in producing an optical computer.
  • We have solved these challenges.
  • Hindsight is 20/20.

How does it work?
  • It uses the properties of a photon to deliver infinite compute capability.
  • There are important specifics about making it possible. 
  • I will revisit this note with illustrations and math.

Will it use the current state of silicon chip fabrication?
  • The first prototype will use chips, but in the future we will probably have less dependency on them.
  • It's a new technology.

What are the limitations of the first POC?
  • Like any first POC, it will be limited by the constraints of its initial design.
  • These limitations will be removed by iterating on the design of the optical computer.

What can we do with an optical computer?
  • Possibilities are endless.
  • An optical computer can run on one watt (1 W) or less while providing infinite compute capability.
  • With it, you could create an iPhone entirely out of transparent glass.


Can you provide a few examples where the infinite compute capability of an optical computer may be helpful?
  • One example is AGI (Artificial General Intelligence), the holy grail of AI, which will require a vast amount of compute capability.*
  • With an optical computer, AGI is within reach for humanity. Of course, that solves only one side of the equation; the other is that we still need to solve AGI itself, in general, using a supercomputer first.
  • AGI can run on a single optical computer, because an optical computer is a supercomputer.
  • On a side note regarding AGI: there is a consensus that we need a supercomputer to create AGI, but in my view we need neither a vast amount of compute capability nor an enormous amount of electrical energy to create it. I will cover the additional details on AGI in a separate blog post when I get a chance.

Can we build an optical computer that consumes only one watt?
  • Yes; in fact, this is one of our near-term goals.
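At one watt, battery life becomes simple arithmetic: runtime in hours is battery capacity in watt-hours divided by draw in watts. A minimal sketch, where the capacities are hypothetical examples rather than the specs of any real device:

```python
# Rough battery-life budget for a hypothetical 1 W wearable.
# The battery capacities below are illustrative assumptions;
# real smart-glasses batteries vary widely.
def runtime_hours(battery_wh: float, draw_w: float) -> float:
    """Hours of runtime for a `battery_wh` battery at a constant `draw_w` draw."""
    return battery_wh / draw_w

for capacity in (5.0, 25.0, 50.0):   # hypothetical capacities in Wh
    print(f"{capacity:>4} Wh at 1 W -> {runtime_hours(capacity, 1.0):.0f} h")
```

The same arithmetic shows why the energy question matters: a day of continuous 1 W operation costs 24 Wh, so multi-day battery life needs capacity on the order of tens of watt-hours or an even lower average draw.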

What can we do with a one-watt optical computer?
  • It will be a one-watt optical computer with vast compute performance.
  • For example, imagine it embedded in Apple Glass.
  • With the current state of technology, Apple Glass is severely limited by its underpowered compute capability. 
  • Making it intelligent requires very heavy compute capability.  
  • Heavy compute capability means Apple Glass would require a big battery. 
  • The problem is that we can't require people to carry a purse or bag with a huge battery tethered to Apple Glass.
  • An interim solution is to use the iPhone as the brain of Apple Glass for computation.
  • This is done by connecting Apple Glass and the iPhone wirelessly, using Wi-Fi Direct.  
  • But this creates yet another problem: the refresh-rate performance will be terrible.
  • The virtual objects shown in Apple Glass will seriously lag behind the user's head movements.
  • Users move their heads rapidly, and Apple Glass will have a hard time refreshing the display fast enough, because all the content is generated on the iPhone and pushed out to Apple Glass via Wi-Fi Direct.
  • The wireless link will be congested and laggy.
  • That will create a horrible user experience.
  • We need an under-10 ms refresh latency on Apple Glass to make it magical.  
  • Otherwise, it's just another AR/VR/XR headset down the pipe.
  • With an optical computer, on the other hand, the display refresh will be butter smooth.
  • It will enable compute-hungry wearable devices like Apple Glass.
  • Not only that, an optical computer will enable a wearable device like Apple Glass to last days on a single charge.
  • Besides Apple Glass-like wearables, there are numerous use cases for an optical computer.
  • Stay tuned for the second blog post on the optical computer, where I will cover these use cases.

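The refresh-rate argument above can be made concrete with a back-of-envelope transport calculation; the resolution, link rate, and compression ratio below are illustrative assumptions, not measurements of Apple Glass or any Wi-Fi Direct device:

```python
# Why wireless offload struggles with a sub-10 ms motion-to-photon budget.
# The resolution, link rate, and compression ratio are illustrative
# assumptions, not measurements of any actual device.
def transport_ms(width: int, height: int, bpp: int, link_bps: float,
                 compression: float = 1.0) -> float:
    """Milliseconds to ship one frame over the link, ignoring protocol overhead."""
    bits = width * height * bpp / compression
    return bits / link_bps * 1000

# Uncompressed 1080p frame (24 bits per pixel) over a 200 Mbit/s wireless link:
raw = transport_ms(1920, 1080, 24, 200e6)           # ~249 ms, far over budget
# With an optimistic 100:1 video codec (which adds its own encode/decode delay):
coded = transport_ms(1920, 1080, 24, 200e6, 100.0)  # ~2.5 ms for transport alone
print(f"raw: {raw:.0f} ms, compressed: {coded:.1f} ms")
```

Even the compressed case spends a quarter of the 10 ms budget on transport before adding encode, decode, and rendering time, which is why generating the frames on the device itself is so attractive.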
How long have you been working on this?
  • This idea has been with me for some time.
  • But I had faced the same challenges. 
  • The current state of technology is not really ready to support a new technology like an optical computer. 
  • Optimistically, we may need 10 or 20 years to get there.
  • Meanwhile, we have found some ways to overcome these limitations without waiting for further advancement of current technology.
