If you do not know the derivative of the function whose root or roots (the solutions of \$x\$ in \$f(x) = 0\$) you are trying to find, I recommend using the secant method instead. It's close enough to being the same thing, easier to compute, and well established in the peer-reviewed literature.
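To make the idea concrete, here is a minimal sketch of the secant method (my own toy example, not from the post): instead of the derivative, it uses the slope of the line through the two most recent points.

```python
# Minimal secant-method sketch. The example function below is just an
# illustration (sqrt(2) as the positive root of x^2 - 2 = 0).
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Return an approximate root of f, starting from guesses x0 and x1."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 - f0 == 0.0:      # flat secant line; cannot take a step
            break
        # Newton would use f'(x1); the secant method approximates it with
        # the slope of the line through (x0, f0) and (x1, f1):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

root = secant(lambda x: x * x - 2.0, 1.0, 2.0)
```

Note that it needs two starting guesses instead of one; in exchange, no derivative is ever evaluated.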
(It's like pseudo-random number generators. Some of them are really good: the sequences they produce look very random and pass a big battery of statistical tests. Some of them are really bad and should not be used at all. The difference? Often just a tiny tweak to one integer constant.)
Octave is a free alternative to Matlab. It is mostly compatible with Matlab, too.
If you cannot afford a Maple (or Mathematica) license, you can use the free SageMath to calculate those derivatives for you symbolically. I used Maple for the above derivatives myself, but I occasionally use the command-line interface of SageMath instead, for "quick stuff" when I already have a terminal open. Most of the time I spend with those is trying to find the "easiest" form of the equation to work with; they all have lots of "transform/simplify/combine/factor" functions, but for complex functions it usually boils down to one form being nicest for one part of the equation, and another form for another part, so some hand-crafting is usually needed. It took me maybe five minutes to work out the function and derivative forms -- but the exponentiation I did immediately, when first seeing the formulae; it's just that useful with equations where one side is a logarithm.
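If you'd rather stay in plain Python, SymPy (which SageMath itself builds on; my suggestion, not something the original tools require) does the same kind of symbolic work. The function here is just an illustrative stand-in, not the one from the post:

```python
# Hedged sketch using SymPy; the function is an assumed example chosen
# because one side involves a logarithm, as in the post.
import sympy as sp

x = sp.symbols('x')
f = sp.log(x) - x / 2        # example function, not the original one
df = sp.diff(f, x)           # symbolic derivative: 1/x - 1/2

# Different "forms" of the same expression, as mentioned above:
forms = [sp.simplify(df), sp.together(df), sp.factor(df)]
```

As the post says, which of `simplify`, `together`, `factor`, etc. gives the "nicest" form depends entirely on the expression, so you usually try several.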
For C, C++, Fortran, and Java, many consider
Numerical Recipes the book for numerical methods. It's not perfect, but it does cover the subject quite well.
Even if you do not write numerical method implementations yourself, knowing the basics -- let's say something like reading this thread and the linked Wikipedia pages, and writing some experimental functions for yourself and seeing how/when they fail --
is demonstrably useful. You do need to fail at first to learn the important stuff, though.
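Here is one such experimental failure you can reproduce in a few lines (my own toy example, not from the thread): Newton's method on the innocent-looking real cube root \$f(x) = x^{1/3}\$ diverges from every nonzero starting point, because the Newton step works out to \$x - f(x)/f'(x) = x - 3x = -2x\$, doubling the distance from the root at \$x = 0\$ on every iteration.

```python
# Toy experiment: Newton's method diverging on f(x) = cbrt(x).
# f'(x) = (1/3) * x^(-2/3), so each step maps x to -2x.
def newton(f, df, x, steps=10):
    for _ in range(steps):
        x = x - f(x) / df(x)
    return x

f  = lambda x: abs(x) ** (1 / 3) * (1 if x >= 0 else -1)  # real cube root
df = lambda x: abs(x) ** (-2 / 3) / 3

x = newton(f, df, 0.1)   # starts 0.1 from the root at 0...
# ...and after 10 doublings ends up about 0.1 * 2**10 = 102.4 away.
```

Watching a "textbook" method blow up on a smooth, monotonic function teaches you more about step control and bracketing than any amount of reading.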
Let me tell you my recent real-world story.
I am not an EE, but I had a few really simple MOSFET switching circuits (steady-state, DC operation; the simplest stuff there is) I was trying to see if/how they would work, before getting the components and putting one of them on a breadboard. So, it was a comparison-and-getting-a-basic-grasp-on-things kind of situation. Running a Linux machine, I installed
Qucs (trivial via Synaptic, my GUI package manager), drew up the simplest version of the circuit, and tried to simulate it. No worky; instead I got error messages, or the simulation didn't seem to progress at all.
Of course, I read the documentation and tutorials and examples, and checked what the error messages meant, but it was not at all clear how any of it related
to the exact circuit I was simulating.
However, because I know how easily a numerical solver diverges or misses solutions when the function is too wonky (has lots of roots, almost-roots, infinities, undefined points, or some combination of these), I knew the first step was to simplify the equations. Because I have calculated some basic DC circuits by hand in a basic electronics course, I knew this does not necessarily mean reducing the number of components, but rather making it simpler to establish the voltage and current at each important point in the circuit. So, I added current-limiting resistors to the voltage source and the MOSFET gate, and added a dummy load of a couple of ohms (as a stand-in for the real-world load I was switching the voltage/current to). Ta-dah, worky-worky now!
At no point did I get too frustrated or feel "stuck", because I had at least a vague idea of what was going wrong, and how to overcome it. If I had thought of the simulation as a black box that magically produced the outputs, without understanding how it does so, having it
fail to produce those outputs even though my circuit looked reasonable on its face would have been extremely frustrating.
Having a bit of practical experience, like knowing that real-world voltage sources do not provide infinite current, nor current sources infinitely high voltages, and knowing the difference to simulated "perfect" components, also helped a lot: I knew how to make the simulated circuit more "realistic".
This same thing has happened to me in computational materials physics, too (at the very beginning, "joining" Morse-potential copper blocks too close together, as if under millions of atmospheres of pressure at their interface only, causing all sorts of havoc and amusing effects; which deeply intensified my interest in the subject), and I've heard from friends doing chemical reactor simulations that it has happened to them as well. In all cases, having a basic understanding of how the simulation math works, and how the simulation differs from the real-life case, has always helped pinpoint the error/issue/misunderstanding before descending into super-frustration. (Although you do usually go through the what-the-fuck-just-happened? phase, which to me, at least, is lots of fun.)