Like miro123, I use notebooks for analysis but scripts for acquisition. For me, acquisitions are often long-running and repetitive (say, a series of measurements across a number of parts), so I run them headless and have them write their data to CSV. For more ad-hoc measurements I could easily see myself using something like python-ivi from a notebook, but even then I prefer writing all raw data to files for later analysis.
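To give a flavour, such an acquisition script boils down to something like the following minimal sketch (read_measurement, the file name, and the columns are made-up stand-ins; the real version talks to the instrument, e.g. via python-ivi):

    import csv
    import random
    import time

    def read_measurement(part_id):
        # Stand-in for the real instrument call; returns a dummy
        # value so this sketch runs on its own.
        return random.gauss(1.0, 0.05)

    with open("foo1_raw.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "part_id", "value"])
        for part_id in range(10):
            writer.writerow([time.time(), part_id, read_measurement(part_id)])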
I use notebooks for analysis too, but copy-pasting code to run the same analysis multiple times gets old really quickly, so for anything non-trivial that I use more than once, like aggregating data in a particular way, I create functions. Initially I define the function in a cell near the top of the notebook, and as soon as multiple notebooks start using it, I move it into a Python module that I can import from any notebook. Putting it in a module also makes the code more testable and easier to maintain than leaving it in notebooks.
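Concretely, that just means a small module sitting next to the notebooks. A minimal sketch, with invented module, function, and column names:

    # foo_analysis.py -- shared helpers that several notebooks import
    import pandas as pd

    def mean_by_part(df: pd.DataFrame) -> pd.DataFrame:
        # Aggregate raw readings into one row per part with its mean value.
        return df.groupby("part_id", as_index=False)["value"].mean()

and in any notebook:

    from foo_analysis import mean_by_part

A module like this can also get its own test file, which is where the testability win comes from.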
For analysis I represent all data as Pandas DataFrames, which make aggregation and basic statistics easy. I then use SciPy for more advanced statistics and curve fitting, and Matplotlib for plotting. This is all pretty well integrated, though occasionally a line of NumPy code is necessary because something expects a NumPy array of particular dimensions instead of a DataFrame. I would say the learning curve is pretty steep.
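A rough sketch of how the pieces fit together (the data and the model are invented for illustration):

    import matplotlib.pyplot as plt
    import numpy as np
    import pandas as pd
    from scipy.optimize import curve_fit

    x = np.linspace(0, 10, 50)
    df = pd.DataFrame({"x": x, "y": 3.0 * np.exp(-0.5 * x)})

    def model(x, a, b):
        # Exponential decay as a stand-in for the real fit model.
        return a * np.exp(-b * x)

    # SciPy works on plain NumPy arrays; .to_numpy() is the kind of
    # one-line shim mentioned above.
    params, _ = curve_fit(model, df["x"].to_numpy(), df["y"].to_numpy())

    plt.scatter(df["x"], df["y"], label="data")
    plt.plot(df["x"], model(df["x"].to_numpy(), *params), label="fit")
    plt.legend()
    plt.show()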
The notebook I use to analyze data from 'foo' devices foo1 and foo2 might then end up reading something like:
    import pandas as pd  # the helper functions below come from my shared analysis module

    # Load the raw acquisition data for each device.
    foo1_data = pd.read_csv(...)
    foo2_data = pd.read_csv(...)

    # Enrich with the environmental conditions recorded during the run.
    foo1_data = add_temperature_humidity_pressure(foo1_data)
    foo2_data = add_temperature_humidity_pressure(foo2_data)

    # Per-device sanity check, fit, and plots.
    show_counts(foo1_data)
    foo1_fit_results = plot_and_fit_foo_data(foo1_data)
    plot_fit_results(foo1_fit_results)

    show_counts(foo2_data)
    foo2_fit_results = plot_and_fit_foo_data(foo2_data)
    plot_fit_results(foo2_fit_results)