This chapter does not give enough detail about some methods for it to be clear what you would actually have to do, but it does provide a good overview, showing why such methods are needed, what they do, and (to an extent) how to choose among them. A more detailed overview and comparison of certain methods is given in Techniques for Requirements Elicitation.
For expert reviews, it is important to be clear about what kind of expert you have: domain experts are useful in quite a different way from interface design experts. For example, expert cognitive walkthroughs may have little value in predicting how new users will respond, but they can still be useful, e.g., for catching inconsistencies.
Usability labs are not available for most projects. The thinking-aloud method has serious problems with tacit knowledge, and it also produces unnatural discourse that can be hard to analyze (for details, see Techniques for Requirements Elicitation, where this method is called "protocol analysis"). Analyzing videotapes is hard work, but it can be worthwhile for some problems if done well. Paper mockups are often a good idea because they are so quick and easy to produce; PowerPoint (or other) slides are one step up from that, but may actually be less useful. The limitations of usability testing mentioned on p.132 are important.
The advice on surveys is good but too vague. As Shneiderman says, "If precise - as opposed to general - questions are used in surveys, then there is a greater chance that the results will provide useful guidance for taking action" (p.134). Statistics really should be used to analyze survey results, and doing this properly requires rather sophisticated knowledge. Acceptance tests are actually part of the contracting process, although user interface professionals may become involved in their design.
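To make the point about statistical analysis concrete, here is a minimal sketch of the kind of test that might be applied to survey results: a chi-square test of independence on yes/no responses from two groups of users. The response counts and the significance threshold are illustrative assumptions, not data from the chapter, and the code uses only the Python standard library:

```python
# Illustrative example: chi-square test of independence on survey data.
# The response counts below are invented for demonstration purposes.

def chi_square(table):
    """Compute the chi-square statistic for a contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# "Did you find the new interface easy to use?" -- (yes, no) counts per group.
responses = [[30, 10],   # novice users
             [20, 20]]   # experienced users

stat = chi_square(responses)
# Critical value for 1 degree of freedom at the 0.05 level is 3.84.
print(f"chi-square = {stat:.2f}, significant: {stat > 3.84}")
# prints: chi-square = 5.33, significant: True
```

Even this simplest possible case involves choosing an appropriate test, knowing the degrees of freedom, and checking the test's assumptions (e.g., adequate expected counts in each cell), which illustrates why analyzing survey data well requires more statistical sophistication than the chapter suggests.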
Interviews and focus groups are highly recommended (at least by me). Continuous computer-based data logging raises serious privacy issues; moreover, the social information that might have the most impact is probably not available from such a source. Bulletin boards, newsletters, online help, etc., can be important. As Shneiderman says, "Every technical system is also a social system that needs to be encouraged and nurtured" (p.149). Controlled psychological experiments are a lot of trouble and are unlikely to be very useful in most cases. Perhaps the sentence on p.150, "If you are not measuring, you are not doing human factors!", is left over from a previous edition, since there is little in this book that argues for the value of human factors in this traditional sense, which is at most a small part of what user interface designers do today.