We have a model trained on those 2 classes; let's call it the base_model, and we have the weights from that trained model. Given the situation, we need to define a new model, because classification will now be over 3 classes (A, B, C) rather than 2, so the old network won't work as-is. In the new network we will reuse the weights from the base_model. A neural network requires some initial weights for training; one usually starts with random weights that are then adjusted as the model trains. Here, the trained weights from the base_model serve as the initial weights of the new model, which is then trained on the new 3-class data. The previous training is not thrown away; the model builds further on the knowledge it already has. This is the basic essence of transfer learning.
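As a concrete illustration, here is a minimal sketch in PyTorch (the framework, the layer sizes, and names like `Classifier` and `base_model.pt` are my assumptions, not something from the question): a new network with a 3-class output head is created, and the trained weights of the shared layers are copied over from the base model, while the new head keeps its random initialization.

```python
import torch
import torch.nn as nn

# Hypothetical architecture: a shared feature extractor plus a
# classification head. Layer sizes here are illustrative assumptions.
class Classifier(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
        )
        self.head = nn.Linear(128, num_classes)  # output layer

    def forward(self, x):
        return self.head(self.features(x))

base_model = Classifier(num_classes=2)   # trained on the 2 original classes
# In practice the trained weights would be loaded from disk, e.g.:
# base_model.load_state_dict(torch.load("base_model.pt"))

new_model = Classifier(num_classes=3)    # classes A, B, C

# Copy the trained weights of the shared layers into the new model;
# the 3-class head stays randomly initialized and will be trained.
new_model.features.load_state_dict(base_model.features.state_dict())
```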
Also, you can freeze all the weights that came from the original layers and allow the "new" model to optimize only the new weights. That gives you exactly the same predictive power for the original 2 classes and might give OK performance for the new one(s); a sketch follows.
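One common way to realize the freezing option, continuing the hypothetical `new_model` above: gradients are disabled for the copied layers, so the optimizer can only update the new head.

```python
# Freeze everything that came from the base model; only the new
# 3-class head stays trainable.
for param in new_model.features.parameters():
    param.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in new_model.parameters() if p.requires_grad),  # head only
    lr=1e-3,
)
```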
OR
Train the whole network at once (propagating the error from the new class through all layers). This might work well for the new class(es), but you will end up with a degraded solution for the original 2 classes, since the weights in the lower layers will change and the original final layer will no longer match those changed features.
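A sketch of this second option, again under the same hypothetical setup: all parameters stay trainable and the whole network is fine-tuned on the 3-class data. A smaller learning rate is often used to limit how far the pre-trained weights drift, though that only softens the effect described above.

```python
import torch.nn.functional as F

# All parameters trainable; a small learning rate limits how far
# the pre-trained weights drift during fine-tuning.
optimizer = torch.optim.Adam(new_model.parameters(), lr=1e-4)

def train_step(x, y):
    """One optimization step on a batch (x, y) with labels in {0, 1, 2}."""
    optimizer.zero_grad()
    loss = F.cross_entropy(new_model(x), y)
    loss.backward()   # the error propagates through every layer
    optimizer.step()
    return loss.item()
```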