Leave things alone.
There are no objective criteria that define a "10" versus a "9" versus an "8" mountain/sunset/flower/route and so on. The distinctions are too ill-defined. If 50 years and thousands of climbers can't agree on the distinction between class 3 and class 4, for example, we're not going to have better success with voting criteria. Thus, you run the risk of reducing the voting weight of certain people when they don't "use the system correctly", even though the system itself is inexact. How exactly would you explain your reasoning to an affected voter? The whole system is hopelessly subjective, and frankly, that's fine. If you can completely and axiomatically* develop an objective voting system, then build a robot to vote on all submissions.
Everyone votes in a system of their own choosing. Some vote 10 or not at all. Some use a smattering of the numbers. The end result is a mish-mash, yet in the broadest terms, the better pages float to the top and the worse ones sink to the bottom. Thus, there is no statistically significant difference between a page scored 82.15% and one scored 82.53%. We may as well judge pages by their percentiles, that is, their relative "quality" against similar pages.
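The percentile idea can be sketched in a few lines of Python. The page names and scores below are invented for illustration; the point is only that ranking position, not the raw percentage, is what carries any meaning:

```python
# Hypothetical page scores (made-up numbers, not real SummitPost data).
scores = {
    "Peak A": 82.15,
    "Peak B": 82.53,
    "Peak C": 94.24,
    "Peak D": 76.80,
}

def percentile_rank(name, scores):
    """Percentage of pages this page scores at or above (itself included)."""
    values = list(scores.values())
    at_or_below = sum(1 for s in values if s <= scores[name])
    return 100.0 * at_or_below / len(values)

for name in sorted(scores, key=scores.get, reverse=True):
    print(f"{name}: {percentile_rank(name, scores):.0f}th percentile")
```

Note how Peak A and Peak B, separated by a meaningless 0.38 points, land at adjacent percentiles either way; the ranking tells you everything the decimals pretend to.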
There is also the fact that certain types of mountains draw more views and more votes. The more famous peaks, and those that are more "mountaineery" with glaciers and knife-edges, outdraw the views and the votes of other types of peaks, e.g. desert summits, or forested Appalachian mountains, or other peaks that normally don't get considered when we think of mountaineering.
For example, as of February 25, 2013, Mount Rainier has a 100% rating, 263 votes and over 437,000 hits. Ruby Dome (in Nevada) has a 94.24% rating, just 46 votes and only 22,186 hits. Both are outstanding mountains with challenges unique to each, but is Mount Rainier better than Ruby Dome? What defines "better"? Their respective pages look about equal in quality. Both are well done, but Rainier wins because it's just a more famous mountain than Ruby Dome. More people gravitate to Rainier than to Ruby Dome simply because more people have heard of Rainier.
Now, view the page for Wheeler Peak (Great Basin, Nevada): http://www.summitpost.org/wheeler-peak/150191. This page has a near-98% rating, but the page itself is very scant and hasn't been updated since 2005, when SP1 was in place. People aren't voting on the page, they're voting on the mountain. A crap page on some beautiful peak in the Alps will outdraw, outvote and outscore a well-constructed page on a desert peak in New Mexico or Morocco any day of the week. That's just reality. That's the unavoidable bias we all have toward snowy, pointy mountains.
We've already toyed with scaling, weighting and applying logistic curves to voting, and none of it solves whatever problem we believe still persists. You're just moving laterally after a certain point. Some might say there really is no problem, that abuses of the system are far outweighed by "intended" uses of the system, or even balanced by other abuses...
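For anyone curious what a logistic-weighting scheme even looks like, here is a minimal sketch. Everything here is an assumption: the midpoint, steepness, and site-wide prior are invented, and this is not how SummitPost actually computes scores. It only shows the mechanic, damping pages with few votes toward an average, which, as argued above, still doesn't touch the underlying subjectivity:

```python
import math

def logistic_weight(vote_count, midpoint=50, steepness=0.1):
    """Logistic curve on vote count: weight approaches 1 as votes
    accumulate past the midpoint, and drops toward 0 below it."""
    return 1.0 / (1.0 + math.exp(-steepness * (vote_count - midpoint)))

def adjusted_score(raw_score, vote_count, prior=85.0):
    """Shrink a raw percentage toward a site-wide prior when votes are scarce.
    The prior of 85.0 is an arbitrary stand-in for a site average."""
    w = logistic_weight(vote_count)
    return w * raw_score + (1.0 - w) * prior

# A Rainier-like page (263 votes): the score barely moves.
print(adjusted_score(100.0, 263))
# A Ruby Dome-like page (46 votes): the score gets pulled toward the prior.
print(adjusted_score(94.24, 46))
```

The catch is visible immediately: the choice of midpoint, steepness, and prior is just as arbitrary as the votes themselves, so you've swapped one set of subjective knobs for another.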
What we have now works in its clunky, taped-together way. Thus, I vote to leave things alone. Bear in mind, my vote has a weight of 94.39%, and I am most proud of all the hard extra work I have done to get that extra 0.39. By June, I hope to be somewhere around 94.46.
(* No need to remind me about Gödel's incompleteness theorem)