Abstract Classes & Interfaces & Final
An abstract class is a special type of Java class that has the following characteristics:
1. Abstract classes cannot be instantiated as objects.
2. Abstract classes are usually used as superclasses from which to derive inherited classes that can be instantiated.
3. Abstract classes usually contain base implementations of methods that all derived classes
I think the best way to understand an abstract class is by way of a real-world example.
Consider a Mammal class.
As you might recall from primary school, all mammals share various methods and
properties: they are warm-blooded, maintain homeostasis, and have hair.
However, it would not make sense to instantiate a Mammal object directly from the
Mammal class.
Think about it: in the real world there really aren't any plain, vanilla mammals. Rather,
there are animals that belong to the mammal class by inheritance, such as dogs, whales,
and people. But straight mammals? There aren't any in nature.
(Rule 2: Abstract classes are usually used as superclasses from which to derive inherited classes.)
However, Mammal is not simply an organizational tool; it has value because it does real
work. The Mammal class defines methods and properties, so inherited classes like Dog,
Whale, and Person do not need to do that work themselves.
(Rule 3: Abstract classes usually contain base implementations of methods that all
derived classes need or want.)
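Here is a minimal sketch of the Mammal idea in Java; the makeSound method is an
illustrative assumption, not something from the text:

// Mammal cannot be instantiated (Rule 1), but supplies base
// implementations that every subclass inherits (Rule 3).
abstract class Mammal {
    boolean isWarmBlooded() {
        return true;    // shared behavior, written once
    }

    // No base implementation: each concrete mammal supplies its own.
    abstract String makeSound();
}

class Dog extends Mammal {
    @Override
    String makeSound() {
        return "Woof";
    }
}

// Mammal m = new Mammal();  // compile error: Mammal is abstract
// Mammal m = new Dog();     // fine: Dog inherits isWarmBlooded()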
So what good are abstract classes to you as a programmer?
Well, they are really a convenience tool that you can use when you develop object
libraries. Abstract classes give you the ability to define a class from which you will derive
a set of classes when you want them all to inherit a set of functionality.
Thus they provide you a way to implement a set of methods and/or properties that all
derived classes will inherit.
However, it is crucial to be clear that the abstract class itself is never instantiated directly.
See the example given below:
"Imagine a class called Transaction that is to be used in a home banking application. This
class has two subclasses: Deposit and Withdrawal. It might not make sense to create an
instance of Transaction; a customer doesn't generically announce to the bank teller, 'I
want to make a transaction', but rather 'I want to make a deposit' or 'I want to make a
withdrawal'."
I think that Barry could go one step further. Not only does it not make sense to create a
Transaction object, it might even be something that you want to discourage as the
developer of an object library.
By declaring the class as abstract you force other programmers to maintain your sense of
how the hierarchy works.
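In code, the Transaction hierarchy might be sketched like this; the execute method and
the amount field are illustrative assumptions:

abstract class Transaction {
    protected final double amount;   // bookkeeping every transaction shares

    protected Transaction(double amount) {
        this.amount = amount;
    }

    // Each concrete transaction defines what it actually does.
    abstract void execute();
}

class Deposit extends Transaction {
    Deposit(double amount) { super(amount); }
    @Override void execute() { /* credit the account */ }
}

class Withdrawal extends Transaction {
    Withdrawal(double amount) { super(amount); }
    @Override void execute() { /* debit the account */ }
}

// new Transaction(100.0);   // compile error: you can't "make a transaction"
// new Deposit(100.0);       // fine: "I want to make a deposit"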
An interface is like an abstract class in that interfaces are not instantiated into objects.
However, the similarity pretty much stops there.
Interfaces are not primarily concerned with defining a super class from which to derive
subclasses.
Similarly, interfaces do not contain base implementations to be used by subclasses.
Instead, an interface defines a set of "unimplemented" methods that some other class
must implement in order to claim the title of "official implementer of the interface."
Interfaces are essentially contracts: any class that claims to implement an interface
agrees to implement the set of methods the interface defines.
Once an implementer has implemented the methods, any other object in the application
can trust that the implementer has in fact implemented the methods defined by the
interface.
Like abstract classes, interfaces are constructs to help programmers keep object libraries
controlled and to enforce standardization on further development by other programmers.
They are there to help you maintain high coding standards.
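As a sketch of the contract idea, using a hypothetical Printable interface:

// The contract: no implementations, just method signatures.
interface Printable {
    void print();
}

// Report claims the title "official implementer of Printable"
// by agreeing to implement every method the interface defines.
class Report implements Printable {
    @Override
    public void print() {
        System.out.println("Printing report...");
    }
}

// Any other object can now rely on the contract alone:
// Printable p = new Report();
// p.print();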
Final Keyword
The final keyword is often misused: it is overused when declaring classes and methods,
and underused when declaring instance fields. Java practitioner Brian Goetz offers some
guidelines for the effective use of final.
Like its cousin, the const keyword in C, final means several different things depending on
the context. The final keyword can be applied to classes, methods, or fields. When
applied to a class, it means that the class cannot be subclassed. When applied to a
method, it means that the method cannot be overridden by a subclass. When applied to a
field, it means that the field's value must be assigned exactly once in each constructor and
can never change after that.
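A minimal sketch of the three uses (all names are illustrative):

// Applied to a class: Constants cannot be subclassed.
final class Constants { }
// class MoreConstants extends Constants { }   // compile error

class Shape {
    // Applied to a field: assigned exactly once, in the constructor.
    private final int sides;

    Shape(int sides) {
        this.sides = sides;   // any later reassignment is a compile error
    }

    // Applied to a method: subclasses may not override it.
    final int sides() {
        return sides;
    }
}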
Most Java texts properly describe the usage and consequences of using the final keyword,
but offer little in the way of guidance as to when, and how often, to use final. In my
experience, final is vastly overused for classes and methods (generally because
developers mistakenly believe it will enhance performance), and underused where it will
do the most good -- in declaring class instance variables.
Premature optimization
Declaring methods or classes as final in the early stages of a project for performance
reasons is a bad idea for several reasons. First, early stage design is the wrong time to
think about cycle-counting performance optimizations, especially when such decisions
can constrain your design the way using final can. Second, the performance benefit
gained by declaring a method or class as final is usually zero. Third, declaring complicated,
stateful classes as final discourages object-oriented design and leads to bloated, kitchen-
sink classes, because they cannot be easily refactored into smaller, more coherent classes.
Like many myths about Java performance, the erroneous belief that declaring classes or
methods as final results in better performance is widely held but rarely examined. The
argument goes that declaring a method or class as final means that the compiler can inline
method calls more aggressively, because it knows that at run time this is definitely the
version of the method that's going to be called. But this is simply not true. Just because
class X is compiled against final class Y doesn't mean that the same version of class Y
will be loaded at run time. So the compiler cannot inline such cross-class method calls
safely, final or not. Only if a method is private can the compiler inline it freely, and in that
case, the final keyword would be redundant.
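A tiny sketch of that last point (names are illustrative):

class Greeter {
    // A private method can never be overridden, so the compiler may
    // already inline it freely; marking it final would be redundant.
    private String greeting() {
        return "hello";
    }

    void greet() {
        System.out.println(greeting());
    }
}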
On the other hand, the run-time environment and JIT compiler have more information
about what classes are actually loaded, and can make much better optimization decisions
than the compiler can. If the run-time environment knows that no classes are loaded that
extend Y, then it can safely inline calls to methods of Y, regardless of whether Y is final
(as long as it can invalidate such JIT-compiled code if a subclass of Y is later loaded). So
the reality is that while final might be a useful hint to a dumb run-time optimizer that
doesn't perform any global dependency analysis, its use doesn't actually enable very
many compile-time optimizations, and is not needed by a smart JIT to perform run-time
optimizations.
Final fields
Final fields are so different from final classes or methods that it's almost unfair to make
them share the same keyword. A final field is a read-only field whose value is guaranteed
to be set exactly once, at construction time (or at class initialization time for static final
fields). As discussed earlier, with final classes and methods you should always ask
yourself if you really need to use final. With final fields, you should ask yourself the
opposite question -- does this field really need to be mutable? You might be surprised at
how often the answer is no.
Documentation value
final fields have several benefits. Declaring a field as final has valuable documentation
benefits for developers who want to use or extend your class: not only does it help
explain how the class works, it also enlists the compiler's help in enforcing your design
decisions. Unlike final methods, final fields really do help the optimizer make better
optimization decisions, because if the compiler knows a field's value will not change, it
can safely cache that value in a register. final fields also provide an extra level of safety
by having the compiler enforce that a field is read-only.
In the extreme case, where a class's fields are all final primitives or final references to
immutable objects, the class itself becomes immutable, a very convenient situation
indeed. Even if the class is not wholly immutable, making certain portions of its state
immutable can greatly simplify development: you don't have to synchronize to
guarantee that you are seeing the current value of a final field, or to ensure that no one
else is changing that portion of the object's state.
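For example, a minimal sketch of a fully immutable class (the Point name is illustrative):

// Every field is a final primitive, so Point instances can never
// change after construction and are safe to share between threads
// without synchronization.
final class Point {
    private final int x;
    private final int y;

    Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    int x() { return x; }
    int y() { return y; }
}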
So why are final fields so underused? One reason is because they can be a bit
cumbersome to use correctly, especially for object references whose constructors can
throw exceptions. Because a final field must be initialized exactly once in every
constructor, if the construction of a final object reference may throw an exception, the
compiler may complain that the field might not be initialized. The compiler is generally
smart enough to realize that initialization in each of two exclusive code branches, such as
in an if...else block, constitutes exactly one initialization, but is often less forgiving with
try...catch blocks. For example, most Java compilers won't accept the code in Listing
1:
public class Foo {
    private final Thingie thingie;

    public Foo() {
        try {
            thingie = new Thingie();
        }
        catch (ThingieConstructionException e) {
            // Rejected: the compiler cannot prove that thingie is
            // assigned exactly once across the try and catch paths.
            thingie = Thingie.getDefaultThingie();
        }
    }
}
Instead, you will likely have to restructure the code in a less natural way, initializing a
temporary variable and then assigning the final field exactly once:

public class Foo {
    private final Thingie thingie;

    public Foo() {
        Thingie tempThingie;
        try {
            tempThingie = new Thingie();
        }
        catch (ThingieConstructionException e) {
            tempThingie = Thingie.getDefaultThingie();
        }
        // Exactly one assignment to the final field, so this compiles.
        thingie = tempThingie;
    }
}
Why was final not extended to apply to arrays and referenced objects, similar to the use
of const in C and C++? The semantics and use of const in C++ are quite confusing,
meaning different things depending on where in the expression it appears. The Java
architects were trying to save us from this confusion, but unfortunately they created some
new confusion in the process.