Show simple item record
A Diverse Domain Generative Adversarial Network for Style Transfer on Face Photographs
dc.contributor.author | Tahir, Rabia | |
dc.contributor.author | Cheng, Keyang | |
dc.contributor.author | Memon, Bilal Ahmed | |
dc.contributor.author | Liu, Qing | |
dc.date | 2022-09 | |
dc.date.accessioned | 2022-10-20T11:09:35Z | |
dc.date.available | 2022-10-20T11:09:35Z | |
dc.identifier.issn | 1989-1660 | |
dc.identifier.uri | https://reunir.unir.net/handle/123456789/13680 | |
dc.description.abstract | Style transfer applied to real-time photographs is now widely used, particularly in social networking applications such as SnapChat and beauty cameras. A number of style transfer algorithms have been proposed, but they are computationally expensive and produce artifacts in the output image. Moreover, most existing work focuses only on transferring a few traditional painting styles to real photographs. In contrast, our work considers diverse style domains transferred to real photographs with a single model. In this paper, we propose a Diverse Domain Generative Adversarial Network (DD-GAN) that performs fast, diverse-domain style translation on human face images. Our approach is highly efficient and applies different attractive, distinctive painting styles to human photographs while preserving the content after translation. Moreover, we adopt a new loss function and use the PReLU activation function, which improves and speeds up training and helps achieve high accuracy; the loss function also helps the proposed model produce better reconstructed images (an illustrative code sketch of these components follows this record). The proposed model additionally occupies less memory during training. We use several evaluation metrics to assess the accuracy of our model. The experimental results demonstrate the effectiveness of our method compared with the state of the art. | es_ES |
dc.language.iso | eng | es_ES |
dc.publisher | International Journal of Interactive Multimedia and Artificial Intelligence (IJIMAI) | es_ES |
dc.relation.uri | https://www.ijimai.org/journal/bibcite/reference/3145 | es_ES |
dc.rights | openAccess | es_ES |
dc.subject | generative adversarial network | es_ES |
dc.subject | CycleGAN | es_ES |
dc.subject | Gated GAN | es_ES |
dc.subject | PReLU | es_ES |
dc.subject | smooth L1 loss | es_ES |
dc.subject | style transfer | es_ES |
dc.subject | IJIMAI | es_ES |
dc.title | A Diverse Domain Generative Adversarial Network for Style Transfer on Face Photographs | es_ES |
dc.type | article | es_ES |
reunir.tag | ~IJIMAI | es_ES |
dc.identifier.doi | https://doi.org/10.9781/ijimai.2022.08.001 | |
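The record above names the PReLU activation and a smooth L1 reconstruction loss as key ingredients of DD-GAN. The following PyTorch snippet is a minimal, hedged sketch of how those two pieces could fit together; the layer widths, the residual-block structure, and the loss weight are assumptions for illustration, not the DD-GAN architecture described in the paper.

```python
# Minimal sketch of two components named in the record: PReLU activations
# inside a generator block, and a smooth L1 (Huber-style) reconstruction loss.
# Layer widths, the residual structure, and the loss weight are illustrative
# assumptions, not the authors' DD-GAN implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualBlockPReLU(nn.Module):
    """A residual block that uses PReLU instead of ReLU."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.act = nn.PReLU(num_parameters=channels)  # learnable negative slope

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.act(self.conv1(x))
        out = self.conv2(out)
        return x + out  # residual connection helps preserve content


def reconstruction_loss(generated: torch.Tensor,
                        target: torch.Tensor,
                        weight: float = 10.0) -> torch.Tensor:
    """Smooth L1 reconstruction term, weighted against the adversarial loss."""
    return weight * F.smooth_l1_loss(generated, target)


if __name__ == "__main__":
    block = ResidualBlockPReLU(channels=64)
    x = torch.randn(1, 64, 32, 32)         # dummy feature map
    y = block(x)
    print(y.shape)                          # torch.Size([1, 64, 32, 32])
    print(reconstruction_loss(y, x).item())
```

In a cycle-consistent training setup such as the CycleGAN and Gated GAN baselines the paper compares against, a reconstruction term like this would be added to the adversarial losses of the generator/discriminator pairs rather than used on its own.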