To be a bit pedantic, FPGAs are configured at power-up from a boot serial flash, which holds a bitstream generated from a hardware description language like VHDL or Verilog. The FPGA itself is not a device manufactured for a specific application; rather, it is like a box of Lego that can be put together from its components (logic element blocks consisting of look-up tables and glue logic) through a flexible mesh of programmable interconnections.
This is done after manufacturing by an end user, just like a microprocessor would be programmed, but instead of telling a fixed, deterministic processor architecture what to do (microprocessor), you tell an FPGA what to be. It is even possible to “emulate”/recreate a softcore processor inside an FPGA, which is how I’ve seen people make very accurate NES-on-FPGA projects. In university, I took a class where we implemented a Motorola 68000 microprocessor inside a Xilinx Spartan-3E. Very cool to work with, but a pretty steep learning curve with the syntax and the IDE.
I think there was just some muddiness here around the distinction between an FPGA and an ASIC. They are essentially the same thing, apart from the NRE (non-recurring engineering) cost - from https://anysilicon.com/fpga-vs-asic-choose/:
An ASIC is similar in development to a mask ROM in that it requires close work with a silicon fab house to design the die to your specifications. An FPGA, however, is closer to a microprocessor in terms of development for the end user.
An analogy for ASICs is paying a baker to make you a cake that you provide design drawings and ingredient lists for, while an FPGA would be like buying a box of cake mix with frosting and sprinkles and making it yourself. I suppose a microcontroller then would be kinda like buying a premade cake, then doing the icing design on top and cutting it into however many slices you want.
The syntax of most HDLs resembles procedural languages (Verilog and VHDL look a lot like Pascal or BASIC variants), but because you end up describing ‘what to be’ and not ‘what to do’, they’re more comparable to ‘declarative programming’ (e.g. SQL, HTML).
There’s debate about whether declarative languages count as ‘programming’ or not,
hence many consider that HTML is not a programming language.
Some people consider SQL to be a ‘query language’ and not a programming language because it just describes fetching data from a database,
and others think it does count as programming because it has basic flow control (i.e. IF and ELSE).
For me, the crux of the difference is that HDLs aren’t general purpose enough to write traditional ‘software’ on, they’re more of a way to describe how data should flow through a piece of hardware - i.e. describing a hardware setup, not a procedure for a processor to perform.
Hence I’d class HDLs as being closer to hardware engineering than programming,
even though they involve using a computer language and even though you can change the circuit’s behaviour based on what you write with the language.
Likewise it’s possible to emulate an FPGA using a processor.
Yes, it is VERY expensive to do an ASIC because you essentially need to pay for a fab run to manufacture those ICs. But in exchange, everything that is not needed is cut from the design, which saves a certain amount of money per chip.
But on the other hand, I bet there are processors that are cheaper (and smaller) than the one in @uXe’s design that are also capable of doing the work. Maybe another ATmega32U4 would be good. Or maybe you need something bigger.
But I am not sure what.
I kind of view it as that.
But now there are games (similar to the FreeGear but a lot crappier) built solely with HTML5, and they can therefore be a one-of-a-kind pleasure for kids who cannot download games onto their computers (or iPads, due to administrator restrictions).
@pharap do you think it is possible to make an FPGA work with an Arduino?
That FPGA chip could get power from the display’s power pins, and I think it would just need to read the information intended for the screen, send feedback (if required), and output it all over VGA.
I can gut a VGA cable, at least.
My school had its own FPGA-based “toy” for student projects. It’s called FITKit and it’s a board with a few buttons, a small LCD display and various interfaces that can be programmed to do various things (games, controlling robots, measuring things, …). I didn’t get involved with it too much, but it’s open-source/open-core and well tested by many students by now. So if you want to design something FPGA-based, this may be one inspiration to take a look at:
I still have my digilent basys 2 that I bought at heavy discount through a course I took in undergrad. I fully intended to make a gameboy to vga adapter and got up to the point where I could generate the vga from dual port ram but it got shelved and I haven’t touched it since I graduated.
With the MKR VIDOR 4000 you can configure it the way you want; you can essentially create your own controller board. It comes loaded with hardware and potential: 8 MB of SRAM; a 2 MB QSPI Flash chip — 1 MB allocated for user applications; a Micro HDMI connector; a MIPI camera connector; and Wi-Fi & BLE powered by a U-BLOX NINA W10 Series module. It also includes the classic MKR interface, on which all pins are driven by both the SAMD21 and the FPGA. Plus, it has a Mini PCI Express connector with up to 25 user-programmable pins.
The FPGA contains 16K Logic Elements, 504 KB of embedded RAM, and 56 18x18 bit HW multipliers for high-speed DSP. Each pin can toggle at over 150 MHz and can be configured for functions such as UARTs, (Q)SPI, high resolution/high frequency PWM, quadrature encoder, I2C, I2S, Sigma Delta DAC, etc.
The on-board FPGA can be also used for high-speed DSP operations for audio and video processing. This board also features a Microchip SAMD21. Communication between FPGA and the SAMD21 is seamless.
University. It’s really super useful for the projects in hardware courses and students use it heavily for their theses. I remember in one project we implemented a simple CPU in VHDL on it (not from scratch, just filled in some functions). Another project included a Brainf*ck interpreter (in Catapult C, I think). I’m not sure it would be that beneficial in secondary education, so no one would sponsor it.
Sparked my own curiosity with this thought - and after some digging, and some fiddling, I now have a working minimal example for a RISC-V (RV32) core that runs on an iCE40 FPGA and will accept uploads over serial of ‘.ino.hex’ files written and compiled in the Arduino IDE using FPGArduino’s boards package!
Thanks for the tips - can you explain why? More efficient? or just better etiquette?
I did have a feeling the code in the sketch would come under some scrutiny — I didn’t actually write it myself though, just pulled it from this surprisingly lengthy ‘issue’ thread that was raised by somebody else asking for iCE40 support, but which contained the clues I needed to get it working myself:
Mostly better etiquette, but a few other benefits too.
Using a variable instead of a macro will give you better error messages: the preprocessor just does a dumb copy-and-paste, replacing the macro with its definition, whereas a variable is actually part of the C++ language, so the compiler knows about it and understands it.
In fact, these days macros are recommended against in 95% of use cases because C++ has language features that can do the job a lot better.
Ideally C++ code should contain as few macros as possible (preferably none).
As for constexpr:
A constexpr variable doesn’t necessarily occupy an address in memory like a regular variable does (the compiler can fold its value in wherever it’s used)
A constexpr variable can be used in contexts that a normal variable cannot, e.g. as the size of an array, char array[constexprVariable];
A constexpr variable cannot have its constness removed through const_cast
I thought it was probably something someone only threw together for quickness,
but if I don’t keep pointing out what’s good and what’s bad then people learning to program won’t know the difference and they’ll keep perpetuating bad habits.
The more people there are using good habits,
the more the good habits get mimicked and perpetuated.
In that case, I wonder why it’s LED = 0xFF;,
that only sets the least significant byte.
Maybe it’s actually supposed to be std::uint8_t instead of std::uint32_t?
The RISC-V core being used is 32-bit, so I guess that’s why they’re just defaulting to uint32_t?
The reason for only setting the least significant byte is that those 8 bits correspond to 8 physical LEDs - the way the ‘virtual GPIOs’ work in this example is by setting the bits in memory from the Arduino side, and then redirecting those values in memory to physical outputs on the FPGA side, here:
The board I used only has 4 LEDs built-in, so that’s why I changed it from setting a byte (0xFF) to only setting a nibble instead (0x0F).