Just a bit of info: C# / .NET is not a JIT language.
For those who don't know the difference (not pointing my finger at anyone here): there are basically 3 types of languages, distinguished by how the computer understands them. But let's start from the very beginning. Our computers know only one thing: CPU instructions. CPU instructions are binary values which tell the CPU (the brain of the computer) how to do things. For example: copy this to there, add these 2 numbers, etc. CPU instructions are pretty limited in number, and they don't depend on the OS, yet they fully depend on the CPU architecture (AMD / Intel share the same one, but ARM is a different beast, for example).
Coding directly in binary form is not really doable... too much risk of making mistakes. To solve this, people created "languages" which better fit human needs and can then be translated into something a CPU can understand.
One of the lowest-level languages is called assembler, which (in its basic form) is a raw translation from short character mnemonics to their binary representation. It helps, but it is still fully CPU dependent, and not easy to work with.
To avoid such issues, and make life easier for programmers, people invented the so-called compiled languages. C is one of those. A compiled language requires that you pass your source code through a tool (a compiler) which transforms the source code into binary code. Usually you have a second pass with a linker which mixes your code with pre-built libraries. Compiling also allows checking for some issues, for example whether your code has syntax errors. It has a great benefit too: in some cases your code can be portable across multiple CPUs as well as OSes, with just the requirement to re-compile. If the compiler is good, as well as your libraries, there should be no overhead compared to hand-written assembly. Usually, however, compiled code is much bigger than something you would write in assembler, due to the fact that it adds all sorts of things you may not actually need for a small piece of code.
Compilation can take a lot of time, and requires tools that you may not always have with you. Therefore something else was needed, and interpreted languages fit this need. Basically, instead of compiling your code, the code is "parsed" while it runs. I will not dig into the multiple ways of doing this and the optimizations you can apply, but keep in mind that this usually comes with a performance hit compared to a compiled language. The real advantage is that it makes it easier to "hack" on some code, modify just a bit and try again. However, some languages are not able to check whether you wrote something wrong until execution reaches that specific line, so you also lose some of the pre-run debugging help you would get from a compilation. Another advantage is that, in principle, interpreted languages are OS and CPU independent as long as you have the interpreter for your CPU / OS.
The last big family of languages is the so-called VM (Virtual Machine) or bytecode languages. Basically it's a mix between compilation and interpretation, Java being a good example. You write your code, then you use a compiler which generates something similar to binary code... yet it's not meant to be run directly by any CPU. To run it you need a VM (the Java JRE provides the VM for Java bytecode). Again, you may have optimizations to make things run faster, but let's say it's nearly an interpretation at this point, where a bytecode "add" is executed as if it were interpreted by the VM. The advantages here are that you get the strength of a compilation which fully checks your code (but does not prevent all bugs, of course), and you run inside an environment which can make your code work independently of the OS and CPU. Java was advertised as "compile once, run anywhere". Sadly it's not fully true.
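To make the bytecode idea a bit more concrete with .NET's own IL (just a small sketch of mine, not how the Java VM works internally; the AddInts name and the whole program are made up for illustration), you can emit a handful of IL instructions at runtime and let the runtime execute them:

```csharp
using System;
using System.Reflection.Emit;

class BytecodeDemo
{
    static void Main()
    {
        // Build a tiny method out of raw IL instructions ("bytecode"):
        // load the two arguments, add them, return the result.
        var add = new DynamicMethod("AddInts", typeof(int), new[] { typeof(int), typeof(int) });
        var il = add.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);
        il.Emit(OpCodes.Ldarg_1);
        il.Emit(OpCodes.Add);
        il.Emit(OpCodes.Ret);

        // The runtime (not the CPU directly) is what turns this
        // CPU-independent IL into native code before executing it.
        var addFunc = (Func<int, int, int>)add.CreateDelegate(typeof(Func<int, int, int>));
        Console.WriteLine(addFunc(2, 3)); // prints 5
    }
}
```

The opcodes (Ldarg, Add, Ret) are CPU independent; it is the runtime's job to map them onto the real instructions of whatever CPU you happen to be on.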
A JIT is a "Just In Time compiler" which means, when you run your interpreted or bytecode you transform actually this slower version in something which should reach the performances of the true compiled code. Java does have a JIT (as well as modern browsers for JS) however it's called an hotspot JIT which will kick in only when a piece of code is marked as hot (basically when it's run multiple times).
To come back to why .NET is not a VM / JIT language, at least on Windows: .NET takes many ideas from Java, yet pushes them a bit further. On Windows, when you start a .NET application, which is stored as bytecode on disk (IL code in Microsoft terminology), the runtime keeps all this bytecode in memory, but instead of interpreting it, it compiles each block as soon as that block is reached. So if you have a function, the function is stored as bytecode, and may stay as bytecode as long as you don't call it. As soon as you call it, it gets compiled. Therefore on Windows, you never actually execute bytecode or IL code. With Mono on Linux the situation is different.
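A rough way to see this "compiled on first call" behaviour yourself (a sketch only; the Work method is made up, and exact numbers depend on the runtime version and its tiered-compilation settings):

```csharp
using System;
using System.Diagnostics;
using System.Runtime.CompilerServices;

class JitDemo
{
    // Forbid inlining so the call really goes through the jitted method.
    [MethodImpl(MethodImplOptions.NoInlining)]
    static long Work(long n)
    {
        long sum = 0;
        for (long i = 0; i < n; i++) sum += i;
        return sum;
    }

    static void Main()
    {
        var sw = Stopwatch.StartNew();
        Work(1);   // first call: the IL -> native compilation happens here
        Console.WriteLine($"first call:  {sw.Elapsed.TotalMilliseconds} ms");

        sw.Restart();
        Work(1);   // already compiled: you only pay for the call itself
        Console.WriteLine($"second call: {sw.Elapsed.TotalMilliseconds} ms");
    }
}
```

If you cannot afford that hit at the first call, RuntimeHelpers.PrepareMethod can be used to force the compilation of a method up front.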
The end result is that .NET code basically runs at the speed of C++ code on Windows. The differences you may see are normally not due to the IL code but to the libraries you use. For example, List<YourClass> may not be as fast as a linked list in C++.
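As a rough illustration that the data structure / library choice matters far more than the IL itself (a naive micro-benchmark sketch of mine; for serious numbers you would want a proper benchmarking tool):

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

class CollectionDemo
{
    static void Main()
    {
        const int n = 50_000;

        // Inserting at the front: List<T> is a growable array, so each
        // Insert(0, ...) shifts everything; LinkedList<T>.AddFirst is cheap.
        var sw = Stopwatch.StartNew();
        var list = new List<int>();
        for (int i = 0; i < n; i++) list.Insert(0, i);
        Console.WriteLine($"List<int> front inserts:       {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        var linked = new LinkedList<int>();
        for (int i = 0; i < n; i++) linked.AddFirst(i);
        Console.WriteLine($"LinkedList<int> front inserts: {sw.ElapsedMilliseconds} ms");

        // Iteration usually favours List<T> thanks to its contiguous memory layout.
        long sum = 0;
        sw.Restart();
        foreach (var v in list) sum += v;
        Console.WriteLine($"List<int> iteration:           {sw.ElapsedMilliseconds} ms (sum {sum})");
    }
}
```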

The second main difference is that in .NET you don't release the RAM when you want; it's normally handled by a garbage collector which wakes up when there is memory pressure (lack of free memory) and starts cleaning up the RAM. That can introduce lags at moments you may not want them, whereas in C++ you have to say yourself when you don't need an object anymore (a source of many memory-leak bugs).
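In C# you can at least observe and influence the collector a bit (a sketch; GCSettings.LatencyMode only asks the runtime to avoid blocking collections at bad moments, it does not give you C++-style deterministic destruction):

```csharp
using System;
using System.Runtime;

class GcDemo
{
    static void Main()
    {
        // Ask for a GC mode that tries to avoid long blocking collections
        // (useful for latency-sensitive code such as a game loop).
        GCSettings.LatencyMode = GCLatencyMode.SustainedLowLatency;

        int gen0Before = GC.CollectionCount(0);

        // Allocate lots of short-lived objects to create memory pressure.
        for (int i = 0; i < 1_000_000; i++)
        {
            var tmp = new byte[128];
        }

        int gen0After = GC.CollectionCount(0);
        Console.WriteLine($"Gen0 collections triggered by pressure: {gen0After - gen0Before}");
        Console.WriteLine($"Managed heap in use: {GC.GetTotalMemory(forceFullCollection: false)} bytes");
    }
}
```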
From my own trials, C# runs as fast as C++, provided you write your C# code well, that is. Memory usage tends to be higher, due to the objects being bigger and containing more information, as well as the garbage collector doing its job maybe later on.
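One concrete example of what "writing your C# well" can mean (my own sketch; whether it pays off depends on the workload): using structs keeps the data in one flat array, like an array of structs in C++, instead of one heap object per element, which reduces both memory usage and GC pressure.

```csharp
using System;

class MemoryLayoutDemo
{
    // Value type: stored inline inside the array, no per-element object header.
    struct PointStruct { public double X, Y; }

    // Reference type: the array holds references, each element is a separate heap object.
    class PointClass { public double X, Y; }

    static void Main()
    {
        const int n = 1_000_000;

        long before = GC.GetTotalMemory(true);
        var structs = new PointStruct[n];                            // one contiguous block
        long afterStructs = GC.GetTotalMemory(true);

        var classes = new PointClass[n];
        for (int i = 0; i < n; i++) classes[i] = new PointClass();   // n small heap objects
        long afterClasses = GC.GetTotalMemory(true);

        Console.WriteLine($"struct array: ~{(afterStructs - before) / 1024} KB");
        Console.WriteLine($"class array:  ~{(afterClasses - afterStructs) / 1024} KB");
        GC.KeepAlive(structs);
        GC.KeepAlive(classes);
    }
}
```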