Computing the semantic similarity between two synsets in WordNet can be easily done with several built-in similarity measures, such as:
synset1.path_similarity(synset2)
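For example, a minimal NLTK sketch (the word senses chosen here are just illustrative, and you need the WordNet data downloaded first):

```python
from nltk.corpus import wordnet as wn  # requires nltk.download('wordnet')

# Pick one sense of each word; word-sense selection is up to you.
dog = wn.synset('dog.n.01')
cat = wn.synset('cat.n.01')

# Path-based similarity over the hypernym hierarchy: a float in (0, 1],
# higher means more similar.
print(dog.path_similarity(cat))

# Other built-in measures, e.g. Wu-Palmer similarity.
print(dog.wup_similarity(cat))
```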
In Kamps et al. (2004), the authors define a graph whose nodes are words, with an edge between two words whenever they are synonyms in WordNet. The shortest path between two words then gives their geodesic distance. As I understand it, the edges are unweighted, so finding the shortest path amounts to counting the minimum number of synonymy edges between the two words (see the sketch after the citation below).
The paper:
Kamps, Jaap, et al. "Using WordNet to Measure Semantic Orientations of Adjectives." LREC. Vol. 4. 2004.
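Here is a rough sketch of that idea, not the authors' code: edges are unweighted, so a plain breadth-first search gives the geodesic distance as an edge count. Building the synonymy graph from NLTK's WordNet synsets and the choice of example words are my own assumptions.

```python
from collections import deque
from nltk.corpus import wordnet as wn

def synonym_neighbors(word):
    """All lemma names that share an adjective synset with `word`."""
    neighbors = set()
    for synset in wn.synsets(word, pos=wn.ADJ):
        for lemma in synset.lemmas():
            neighbors.add(lemma.name())
    neighbors.discard(word)
    return neighbors

def geodesic_distance(start, goal):
    """BFS over the unweighted synonymy graph: the shortest path is
    simply the minimum number of edges between the two words."""
    if start == goal:
        return 0
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        word, dist = queue.popleft()
        for nxt in synonym_neighbors(word):
            if nxt == goal:
                return dist + 1
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # the two words are not connected in the graph

# Kamps et al. relate a word's orientation to its distances from seed
# words such as 'good' and 'bad'.
print(geodesic_distance('good', 'bad'))
```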
But what they are really after is a measure of semantic orientation, not similarity as such, and which measure is best depends on your application. A family of similarity measures that has recently attracted a lot of attention is based on the Distributional Hypothesis: machine learning methods that look at how words are used across large document collections and derive geometric similarity measures between them (e.g. cosine similarity between word vectors). Conceptually, though, these methods are disconnected from WordNet distance measures.
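For concreteness, this is what such a geometric measure looks like; the toy vectors below are placeholders for whatever embeddings your distributional model produces (word2vec, LSA, raw co-occurrence counts, etc.):

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1 = same direction, 0 = orthogonal."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy vectors standing in for learned word embeddings.
v_good = np.array([0.8, 0.1, 0.3])
v_nice = np.array([0.7, 0.2, 0.4])
print(cosine_similarity(v_good, v_nice))
```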
However, there is some work that bridges the two, using WordNet glosses and synset definitions as context samples to learn statistical models of words, such as Patwardhan and Pedersen (2006). In general, though, these models are not suitable for finding sentiment orientation without some supervision signal for positiveness or negativeness.
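To illustrate the general flavor of gloss-based relatedness (this is a deliberately crude simplification, not Patwardhan and Pedersen's actual gloss-vector method): treat each synset's definition and examples as a bag of words and measure how much two bags overlap.

```python
from nltk.corpus import wordnet as wn

def gloss_bag(synset):
    """Bag of lowercased tokens from a synset's definition and example sentences."""
    text = synset.definition() + ' ' + ' '.join(synset.examples())
    return set(text.lower().split())

def gloss_overlap(s1, s2):
    """Crude relatedness score: shared gloss words, normalized by the union size."""
    b1, b2 = gloss_bag(s1), gloss_bag(s2)
    return len(b1 & b2) / max(1, len(b1 | b2))

print(gloss_overlap(wn.synset('good.a.01'), wn.synset('bad.a.01')))
```

Note that a score like this tells you two senses are related, not whether either of them is positive or negative, which is why some supervision is still needed for orientation.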