Image and video compression is a major component of video telephony, video conferencing and other multimedia applications, where digital pixel information can amount to very large volumes of data. Managing such data involves significant overhead in computational complexity and data processing. Compression allows efficient utilization of channel bandwidth and storage capacity; moreover, since the access speed of typical storage media is inversely proportional to its capacity, smaller compressed representations are also faster to retrieve. This paper implements a model for image compression that combines the advantages of wavelet transforms and neural networks. Images are decomposed using Haar wavelet filters into a set of sub-bands with different resolutions corresponding to different frequency bands. Scalar quantization and Huffman coding schemes are applied to the different sub-bands according to their statistical properties: the coefficients of the low-frequency band are compressed by Differential Pulse Code Modulation (DPCM), while the coefficients of the higher-frequency bands are compressed using a neural network.
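
To make the first stage concrete, the following sketch (in Python with NumPy, not the authors' implementation; the function name haar_decompose_2d and the toy input are illustrative assumptions) performs a one-level 2-D Haar decomposition of a grayscale image into LL, LH, HL and HH sub-bands. Under the scheme described above, the LL band would feed the DPCM path and the detail bands the neural-network path.

    import numpy as np

    def haar_decompose_2d(img):
        """One-level 2-D Haar decomposition of a grayscale image.

        Returns the four sub-bands (LL, LH, HL, HH), each half the size
        of the input in both dimensions. Assumes even image dimensions.
        """
        img = img.astype(np.float64)

        # Filter along rows: pairwise average (low-pass) and difference (high-pass).
        lo_rows = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)
        hi_rows = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)

        # Repeat along columns to obtain the four sub-bands.
        LL = (lo_rows[0::2, :] + lo_rows[1::2, :]) / np.sqrt(2)
        LH = (lo_rows[0::2, :] - lo_rows[1::2, :]) / np.sqrt(2)
        HL = (hi_rows[0::2, :] + hi_rows[1::2, :]) / np.sqrt(2)
        HH = (hi_rows[0::2, :] - hi_rows[1::2, :]) / np.sqrt(2)
        return LL, LH, HL, HH

    if __name__ == "__main__":
        # Hypothetical 8x8 test image; in practice this would be a real frame.
        image = np.arange(64, dtype=np.float64).reshape(8, 8)
        LL, LH, HL, HH = haar_decompose_2d(image)
        print(LL.shape, LH.shape, HL.shape, HH.shape)  # each sub-band is 4x4

Multi-level decomposition, as used when several resolutions are required, simply reapplies the same step to the LL band of the previous level.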